00:00:00.001 Started by upstream project "autotest-per-patch" build number 132124
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.128 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.129 The recommended git tool is: git
00:00:00.129 using credential 00000000-0000-0000-0000-000000000002
00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.184 Fetching changes from the remote Git repository
00:00:00.186 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.229 Using shallow fetch with depth 1
00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.229 > git --version # timeout=10
00:00:00.261 > git --version # 'git version 2.39.2'
00:00:00.261 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.287 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.287 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.973 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.985 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.997 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:06.997 > git config core.sparsecheckout # timeout=10
00:00:07.009 > git read-tree -mu HEAD # timeout=10
00:00:07.025 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:07.042 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:07.042 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:07.135 [Pipeline] Start of Pipeline
00:00:07.150 [Pipeline] library
00:00:07.152 Loading library shm_lib@master
00:00:07.152 Library shm_lib@master is cached. Copying from home.
00:00:07.170 [Pipeline] node
00:00:22.172 Still waiting to schedule task
00:00:22.172 Waiting for next available executor on ‘vagrant-vm-host’
00:04:46.009 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest
00:04:46.010 [Pipeline] {
00:04:46.022 [Pipeline] catchError
00:04:46.024 [Pipeline] {
00:04:46.039 [Pipeline] wrap
00:04:46.048 [Pipeline] {
00:04:46.056 [Pipeline] stage
00:04:46.057 [Pipeline] { (Prologue)
00:04:46.076 [Pipeline] echo
00:04:46.079 Node: VM-host-SM4
00:04:46.085 [Pipeline] cleanWs
00:04:46.098 [WS-CLEANUP] Deleting project workspace...
00:04:46.098 [WS-CLEANUP] Deferred wipeout is used...
00:04:46.104 [WS-CLEANUP] done
00:04:46.305 [Pipeline] setCustomBuildProperty
00:04:46.414 [Pipeline] httpRequest
00:04:46.821 [Pipeline] echo
00:04:46.823 Sorcerer 10.211.164.101 is alive
00:04:46.833 [Pipeline] retry
00:04:46.836 [Pipeline] {
00:04:46.850 [Pipeline] httpRequest
00:04:46.854 HttpMethod: GET
00:04:46.855 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:04:46.856 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:04:46.857 Response Code: HTTP/1.1 200 OK
00:04:46.857 Success: Status code 200 is in the accepted range: 200,404
00:04:46.858 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:04:47.003 [Pipeline] }
00:04:47.021 [Pipeline] // retry
00:04:47.030 [Pipeline] sh
00:04:47.311 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:04:47.325 [Pipeline] httpRequest
00:04:47.721 [Pipeline] echo
00:04:47.722 Sorcerer 10.211.164.101 is alive
00:04:47.729 [Pipeline] retry
00:04:47.730 [Pipeline] {
00:04:47.743 [Pipeline] httpRequest
00:04:47.747 HttpMethod: GET
00:04:47.748 URL: http://10.211.164.101/packages/spdk_40c30569f49919969115061354e3be897fd664bb.tar.gz
00:04:47.748 Sending request to url: http://10.211.164.101/packages/spdk_40c30569f49919969115061354e3be897fd664bb.tar.gz
00:04:47.750 Response Code: HTTP/1.1 200 OK
00:04:47.750 Success: Status code 200 is in the accepted range: 200,404
00:04:47.751 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_40c30569f49919969115061354e3be897fd664bb.tar.gz
00:04:50.036 [Pipeline] }
00:04:50.056 [Pipeline] // retry
00:04:50.064 [Pipeline] sh
00:04:50.346 + tar --no-same-owner -xf spdk_40c30569f49919969115061354e3be897fd664bb.tar.gz
00:04:53.637 [Pipeline] sh
00:04:53.919 + git -C spdk log --oneline -n5
00:04:53.919 40c30569f bdevperf: Add no_metadata option
00:04:53.919 3351abe6a bdevperf: Get metadata config by not bdev but bdev_desc
00:04:53.919 8f46604d4 bdevperf: g_main_thread calls bdev_open() instead of job->thread
00:04:53.919 8f724e636 bdev/malloc: Fix unexpected DIF verification error for initial read
00:04:53.919 184250893 dif: Set DIF field to 0 explicitly if its check is disabled
00:04:53.939 [Pipeline] writeFile
00:04:53.954 [Pipeline] sh
00:04:54.281 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:54.293 [Pipeline] sh
00:04:54.573 + cat autorun-spdk.conf
00:04:54.573 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:54.573 SPDK_TEST_NVME=1
00:04:54.573 SPDK_TEST_FTL=1
00:04:54.573 SPDK_TEST_ISAL=1
00:04:54.573 SPDK_RUN_ASAN=1
00:04:54.573 SPDK_RUN_UBSAN=1
00:04:54.573 SPDK_TEST_XNVME=1
00:04:54.573 SPDK_TEST_NVME_FDP=1
00:04:54.573 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:54.579 RUN_NIGHTLY=0
00:04:54.581 [Pipeline] }
00:04:54.593 [Pipeline] // stage
00:04:54.607 [Pipeline] stage
00:04:54.608 [Pipeline] { (Run VM)
00:04:54.619 [Pipeline] sh
00:04:54.899 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:54.899 + echo 'Start stage prepare_nvme.sh'
00:04:54.899 Start stage prepare_nvme.sh
00:04:54.899 + [[ -n 2 ]]
00:04:54.899 + disk_prefix=ex2
00:04:54.899 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:04:54.899 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:04:54.899 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:04:54.899 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:54.899 ++ SPDK_TEST_NVME=1
00:04:54.899 ++ SPDK_TEST_FTL=1
00:04:54.899 ++ SPDK_TEST_ISAL=1
00:04:54.899 ++ SPDK_RUN_ASAN=1
00:04:54.899 ++ SPDK_RUN_UBSAN=1
00:04:54.899 ++ SPDK_TEST_XNVME=1
00:04:54.899 ++ SPDK_TEST_NVME_FDP=1
00:04:54.899 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:54.899 ++ RUN_NIGHTLY=0
00:04:54.899 + cd /var/jenkins/workspace/nvme-vg-autotest
00:04:54.899 + nvme_files=()
00:04:54.899 + declare -A nvme_files
00:04:54.899 + backend_dir=/var/lib/libvirt/images/backends
00:04:54.899 + nvme_files['nvme.img']=5G
00:04:54.899 + nvme_files['nvme-cmb.img']=5G
00:04:54.899 + nvme_files['nvme-multi0.img']=4G
00:04:54.899 + nvme_files['nvme-multi1.img']=4G
00:04:54.899 + nvme_files['nvme-multi2.img']=4G
00:04:54.899 + nvme_files['nvme-openstack.img']=8G
00:04:54.899 + nvme_files['nvme-zns.img']=5G
00:04:54.899 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:54.899 + (( SPDK_TEST_FTL == 1 ))
00:04:54.899 + nvme_files["nvme-ftl.img"]=6G
00:04:54.899 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:54.899 + nvme_files["nvme-fdp.img"]=1G
00:04:54.899 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:54.899 + for nvme in "${!nvme_files[@]}"
00:04:54.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:04:54.899 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:54.899 + for nvme in "${!nvme_files[@]}"
00:04:54.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:04:54.899 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:04:54.899 + for nvme in "${!nvme_files[@]}"
00:04:54.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:04:54.899 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:54.899 + for nvme in "${!nvme_files[@]}"
00:04:54.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:04:54.899 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:54.899 + for nvme in "${!nvme_files[@]}"
00:04:54.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:04:54.899 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:55.157 + for nvme in "${!nvme_files[@]}"
00:04:55.157 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:04:55.157 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:55.157 + for nvme in "${!nvme_files[@]}"
00:04:55.157 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:04:55.157 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:55.157 + for nvme in "${!nvme_files[@]}"
00:04:55.157 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:04:55.157 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:04:55.157 + for nvme in "${!nvme_files[@]}"
00:04:55.157 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:04:55.416 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:55.416 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:04:55.416 + echo 'End stage prepare_nvme.sh'
00:04:55.416 End stage prepare_nvme.sh
00:04:55.427 [Pipeline] sh
00:04:55.706 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:55.706 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:04:55.964
00:04:55.964 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:04:55.964 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:04:55.964 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:04:55.964 HELP=0
00:04:55.964 DRY_RUN=0
00:04:55.964 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:04:55.964 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:04:55.964 NVME_AUTO_CREATE=0
00:04:55.964 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:04:55.964 NVME_CMB=,,,,
00:04:55.964 NVME_PMR=,,,,
00:04:55.964 NVME_ZNS=,,,,
00:04:55.964 NVME_MS=true,,,,
00:04:55.964 NVME_FDP=,,,on,
00:04:55.964 SPDK_VAGRANT_DISTRO=fedora39
00:04:55.964 SPDK_VAGRANT_VMCPU=10
00:04:55.964 SPDK_VAGRANT_VMRAM=12288
00:04:55.964 SPDK_VAGRANT_PROVIDER=libvirt
00:04:55.964 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:04:55.964 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:55.964 SPDK_OPENSTACK_NETWORK=0
00:04:55.964 VAGRANT_PACKAGE_BOX=0
00:04:55.964 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:04:55.964 FORCE_DISTRO=true
00:04:55.964 VAGRANT_BOX_VERSION=
00:04:55.964 EXTRA_VAGRANTFILES=
00:04:55.964 NIC_MODEL=e1000
00:04:55.964
00:04:55.964 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:04:55.964 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:04:59.345 Bringing machine 'default' up with 'libvirt' provider...
00:05:00.333 ==> default: Creating image (snapshot of base box volume).
00:05:00.591 ==> default: Creating domain with the following settings...
00:05:00.591 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730899853_5daa339e85eef4b9149f
00:05:00.591 ==> default: -- Domain type: kvm
00:05:00.591 ==> default: -- Cpus: 10
00:05:00.591 ==> default: -- Feature: acpi
00:05:00.591 ==> default: -- Feature: apic
00:05:00.591 ==> default: -- Feature: pae
00:05:00.591 ==> default: -- Memory: 12288M
00:05:00.591 ==> default: -- Memory Backing: hugepages:
00:05:00.591 ==> default: -- Management MAC:
00:05:00.591 ==> default: -- Loader:
00:05:00.591 ==> default: -- Nvram:
00:05:00.591 ==> default: -- Base box: spdk/fedora39
00:05:00.591 ==> default: -- Storage pool: default
00:05:00.591 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730899853_5daa339e85eef4b9149f.img (20G)
00:05:00.591 ==> default: -- Volume Cache: default
00:05:00.591 ==> default: -- Kernel:
00:05:00.591 ==> default: -- Initrd:
00:05:00.591 ==> default: -- Graphics Type: vnc
00:05:00.591 ==> default: -- Graphics Port: -1
00:05:00.591 ==> default: -- Graphics IP: 127.0.0.1
00:05:00.591 ==> default: -- Graphics Password: Not defined
00:05:00.591 ==> default: -- Video Type: cirrus
00:05:00.591 ==> default: -- Video VRAM: 9216
00:05:00.591 ==> default: -- Sound Type:
00:05:00.591 ==> default: -- Keymap: en-us
00:05:00.591 ==> default: -- TPM Path:
00:05:00.591 ==> default: -- INPUT: type=mouse, bus=ps2
00:05:00.591 ==> default: -- Command line args:
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:05:00.591 ==> default: -> value=-drive,
00:05:00.591 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:05:00.591 ==> default: -> value=-drive,
00:05:00.591 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:05:00.591 ==> default: -> value=-drive,
00:05:00.591 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:00.591 ==> default: -> value=-drive,
00:05:00.591 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:00.591 ==> default: -> value=-drive,
00:05:00.591 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:05:00.591 ==> default: -> value=-drive,
00:05:00.591 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:05:00.591 ==> default: -> value=-device,
00:05:00.591 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:00.849 ==> default: Creating shared folders metadata...
00:05:00.849 ==> default: Starting domain.
00:05:02.224 ==> default: Waiting for domain to get an IP address...
00:05:20.437 ==> default: Waiting for SSH to become available...
00:05:20.437 ==> default: Configuring and enabling network interfaces...
00:05:24.641 default: SSH address: 192.168.121.192:22
00:05:24.641 default: SSH username: vagrant
00:05:24.641 default: SSH auth method: private key
00:05:27.173 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:35.286 ==> default: Mounting SSHFS shared folder...
00:05:37.184 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:37.184 ==> default: Checking Mount..
00:05:38.559 ==> default: Folder Successfully Mounted!
00:05:38.559 ==> default: Running provisioner: file...
00:05:39.127 default: ~/.gitconfig => .gitconfig
00:05:39.693
00:05:39.694 SUCCESS!
00:05:39.694
00:05:39.694 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:05:39.694 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:39.694 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:05:39.694
00:05:39.703 [Pipeline] }
00:05:39.717 [Pipeline] // stage
00:05:39.730 [Pipeline] dir
00:05:39.730 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:05:39.732 [Pipeline] {
00:05:39.746 [Pipeline] catchError
00:05:39.748 [Pipeline] {
00:05:39.760 [Pipeline] sh
00:05:40.040 + vagrant ssh-config --host vagrant
00:05:40.040 + sed -ne /^Host/,$p
00:05:40.040 + tee ssh_conf
00:05:44.229 Host vagrant
00:05:44.229 HostName 192.168.121.192
00:05:44.229 User vagrant
00:05:44.229 Port 22
00:05:44.229 UserKnownHostsFile /dev/null
00:05:44.229 StrictHostKeyChecking no
00:05:44.229 PasswordAuthentication no
00:05:44.229 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:05:44.229 IdentitiesOnly yes
00:05:44.229 LogLevel FATAL
00:05:44.229 ForwardAgent yes
00:05:44.229 ForwardX11 yes
00:05:44.229
00:05:44.243 [Pipeline] withEnv
00:05:44.245 [Pipeline] {
00:05:44.263 [Pipeline] sh
00:05:44.591 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:05:44.591 source /etc/os-release
00:05:44.591 [[ -e /image.version ]] && img=$(< /image.version)
00:05:44.591 # Minimal, systemd-like check.
00:05:44.591 if [[ -e /.dockerenv ]]; then
00:05:44.591 # Clear garbage from the node's name:
00:05:44.591 # agt-er_autotest_547-896 -> autotest_547-896
00:05:44.591 # $HOSTNAME is the actual container id
00:05:44.591 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:05:44.591 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:05:44.591 # We can assume this is a mount from a host where container is running,
00:05:44.591 # so fetch its hostname to easily identify the target swarm worker.
00:05:44.591 container="$(< /etc/hostname) ($agent)"
00:05:44.591 else
00:05:44.591 # Fallback
00:05:44.591 container=$agent
00:05:44.591 fi
00:05:44.591 fi
00:05:44.591 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:05:44.591
00:05:44.603 [Pipeline] }
00:05:44.619 [Pipeline] // withEnv
00:05:44.628 [Pipeline] setCustomBuildProperty
00:05:44.643 [Pipeline] stage
00:05:44.645 [Pipeline] { (Tests)
00:05:44.666 [Pipeline] sh
00:05:44.949 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:05:45.222 [Pipeline] sh
00:05:45.501 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:05:45.776 [Pipeline] timeout
00:05:45.776 Timeout set to expire in 50 min
00:05:45.779 [Pipeline] {
00:05:45.809 [Pipeline] sh
00:05:46.113 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:05:46.729 HEAD is now at 40c30569f bdevperf: Add no_metadata option
00:05:46.740 [Pipeline] sh
00:05:47.018 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:05:47.289 [Pipeline] sh
00:05:47.568 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:05:47.842 [Pipeline] sh
00:05:48.122 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:05:48.381 ++ readlink -f spdk_repo
00:05:48.381 + DIR_ROOT=/home/vagrant/spdk_repo
00:05:48.381 + [[ -n /home/vagrant/spdk_repo ]]
00:05:48.381 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:05:48.381 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:05:48.381 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:05:48.381 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:05:48.381 + [[ -d /home/vagrant/spdk_repo/output ]]
00:05:48.381 + [[ nvme-vg-autotest == pkgdep-* ]]
00:05:48.381 + cd /home/vagrant/spdk_repo
00:05:48.381 + source /etc/os-release
00:05:48.381 ++ NAME='Fedora Linux'
00:05:48.381 ++ VERSION='39 (Cloud Edition)'
00:05:48.381 ++ ID=fedora
00:05:48.381 ++ VERSION_ID=39
00:05:48.381 ++ VERSION_CODENAME=
00:05:48.381 ++ PLATFORM_ID=platform:f39
00:05:48.381 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:48.381 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:48.381 ++ LOGO=fedora-logo-icon
00:05:48.381 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:48.381 ++ HOME_URL=https://fedoraproject.org/
00:05:48.381 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:48.381 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:48.381 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:48.381 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:48.381 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:48.381 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:48.381 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:48.381 ++ SUPPORT_END=2024-11-12
00:05:48.381 ++ VARIANT='Cloud Edition'
00:05:48.381 ++ VARIANT_ID=cloud
00:05:48.381 + uname -a
00:05:48.381 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:48.381 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:48.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:48.898 Hugepages
00:05:48.898 node hugesize free / total
00:05:49.156 node0 1048576kB 0 / 0
00:05:49.156 node0 2048kB 0 / 0
00:05:49.156
00:05:49.156 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:49.156 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:49.156 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:49.156 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:49.156 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:05:49.156 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:49.156 + rm -f /tmp/spdk-ld-path
00:05:49.157 + source autorun-spdk.conf
00:05:49.157 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:49.157 ++ SPDK_TEST_NVME=1
00:05:49.157 ++ SPDK_TEST_FTL=1
00:05:49.157 ++ SPDK_TEST_ISAL=1
00:05:49.157 ++ SPDK_RUN_ASAN=1
00:05:49.157 ++ SPDK_RUN_UBSAN=1
00:05:49.157 ++ SPDK_TEST_XNVME=1
00:05:49.157 ++ SPDK_TEST_NVME_FDP=1
00:05:49.157 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:49.157 ++ RUN_NIGHTLY=0
00:05:49.157 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:49.157 + [[ -n '' ]]
00:05:49.157 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:05:49.157 + for M in /var/spdk/build-*-manifest.txt
00:05:49.157 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:49.157 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:05:49.157 + for M in /var/spdk/build-*-manifest.txt
00:05:49.157 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:49.157 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:05:49.157 + for M in /var/spdk/build-*-manifest.txt
00:05:49.157 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:49.157 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:05:49.157 ++ uname
00:05:49.157 + [[ Linux == \L\i\n\u\x ]]
00:05:49.157 + sudo dmesg -T
00:05:49.157 + sudo dmesg --clear
00:05:49.416 + dmesg_pid=5304
+ sudo dmesg -Tw
00:05:49.416 + [[ Fedora Linux == FreeBSD ]]
00:05:49.416 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:49.416 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:49.416 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:49.416 + [[ -x /usr/src/fio-static/fio ]]
00:05:49.416 + export FIO_BIN=/usr/src/fio-static/fio
00:05:49.416 + FIO_BIN=/usr/src/fio-static/fio
00:05:49.416 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:49.416 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:49.416 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:49.416 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:49.416 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:49.416 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:49.416 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:49.416 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:49.417 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:49.417 13:31:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:05:49.417 13:31:43 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:49.417 13:31:43 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:05:49.417 13:31:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:49.417 13:31:43 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:49.417 13:31:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:05:49.417 13:31:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:49.417 13:31:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:49.417 13:31:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:49.417 13:31:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:49.417 13:31:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:49.417 13:31:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.417 13:31:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.417 13:31:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.417 13:31:43 -- paths/export.sh@5 -- $ export PATH
00:05:49.417 13:31:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.417 13:31:43 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:05:49.417 13:31:43 -- common/autobuild_common.sh@486 -- $ date +%s
00:05:49.417 13:31:43 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730899903.XXXXXX
00:05:49.417 13:31:43 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730899903.1Mm2KA
00:05:49.417 13:31:43 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:05:49.417 13:31:43 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:05:49.417 13:31:43 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:05:49.417 13:31:43 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:05:49.417 13:31:43 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:05:49.417 13:31:43 -- common/autobuild_common.sh@502 -- $ get_config_params
00:05:49.417 13:31:43 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:05:49.417 13:31:43 -- common/autotest_common.sh@10 -- $ set +x
00:05:49.417 13:31:43 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:05:49.417 13:31:43 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:05:49.417 13:31:43 -- pm/common@17 -- $ local monitor
00:05:49.417 13:31:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:49.417 13:31:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:49.417 13:31:43 -- pm/common@25 -- $ sleep 1
00:05:49.417 13:31:43 -- pm/common@21 -- $ date +%s
00:05:49.417 13:31:43 -- pm/common@21 -- $ date +%s
00:05:49.417 13:31:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730899903
00:05:49.417 13:31:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730899903
00:05:49.417 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730899903_collect-vmstat.pm.log
00:05:49.417 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730899903_collect-cpu-load.pm.log
00:05:50.354 13:31:44 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:05:50.354 13:31:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:50.354 13:31:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:50.354 13:31:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:05:50.354 13:31:44 -- spdk/autobuild.sh@16 -- $ date -u
00:05:50.354 Wed Nov 6 01:31:44 PM UTC 2024
00:05:50.354 13:31:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:50.612 v25.01-pre-192-g40c30569f
00:05:50.612 13:31:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:05:50.612 13:31:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:05:50.612 13:31:44 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:05:50.612 13:31:44 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:05:50.612 13:31:44 -- common/autotest_common.sh@10 -- $ set +x
00:05:50.612 ************************************
00:05:50.612 START TEST asan
00:05:50.612 ************************************
00:05:50.612 using asan
00:05:50.612 13:31:44 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:05:50.612
00:05:50.612 real 0m0.000s
00:05:50.612 user 0m0.000s
00:05:50.612 sys 0m0.000s
00:05:50.612 13:31:44 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:05:50.612 13:31:44 asan -- common/autotest_common.sh@10 -- $ set +x
00:05:50.612 ************************************
00:05:50.612 END TEST asan
00:05:50.612 ************************************
00:05:50.612 13:31:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:50.612 13:31:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:50.612 13:31:44 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:05:50.612 13:31:44 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:05:50.612 13:31:44 -- common/autotest_common.sh@10 -- $ set +x
00:05:50.612 ************************************
00:05:50.612 START TEST ubsan
00:05:50.612 ************************************
00:05:50.612 using ubsan
00:05:50.612 13:31:44 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:05:50.612
00:05:50.612 real 0m0.000s
00:05:50.612 user 0m0.000s
00:05:50.612 sys 0m0.000s
00:05:50.612 13:31:44 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:05:50.612 13:31:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:50.612 ************************************
00:05:50.612 END TEST ubsan
00:05:50.612 ************************************
00:05:50.612 13:31:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:50.612 13:31:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:50.612 13:31:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:50.612 13:31:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:50.612 13:31:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:50.612 13:31:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:50.612 13:31:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:50.612 13:31:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:50.612 13:31:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:05:50.612 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:50.612 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:51.178 Using 'verbs' RDMA provider
00:06:07.463 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:06:19.703 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:06:19.961 Creating mk/config.mk...done.
00:06:19.961 Creating mk/cc.flags.mk...done.
00:06:19.961 Type 'make' to build.
00:06:19.961 13:32:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:06:19.961 13:32:13 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:06:19.961 13:32:13 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:06:19.961 13:32:13 -- common/autotest_common.sh@10 -- $ set +x
00:06:19.961 ************************************
00:06:19.961 START TEST make
00:06:19.961 ************************************
00:06:19.961 13:32:13 make -- common/autotest_common.sh@1127 -- $ make -j10
00:06:20.219 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:06:20.219 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:06:20.219 meson setup builddir \
00:06:20.219 -Dwith-libaio=enabled \
00:06:20.219 -Dwith-liburing=enabled \
00:06:20.219 -Dwith-libvfn=disabled \
00:06:20.219 -Dwith-spdk=disabled \
00:06:20.219 -Dexamples=false \
00:06:20.219 -Dtests=false \
00:06:20.219 -Dtools=false && \
00:06:20.219 meson compile -C builddir && \
00:06:20.219 cd -)
00:06:20.219 make[1]: Nothing to be done for 'all'.
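For reference, the xnvme configure-and-build step that make -j10 launches above can be re-run by hand with the same Meson options (a minimal bash sketch, assuming the /home/vagrant/spdk_repo checkout layout used in this run; the option set is copied verbatim from the command echoed above):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
  # Enable the libaio and io_uring backends, disable libvfn and the spdk
  # subproject, and skip examples/tests/tools, as in the CI invocation.
  meson setup builddir \
      -Dwith-libaio=enabled \
      -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled \
      -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir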
00:06:23.499 The Meson build system
00:06:23.499 Version: 1.5.0
00:06:23.499 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:06:23.499 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:23.499 Build type: native build
00:06:23.499 Project name: xnvme
00:06:23.499 Project version: 0.7.5
00:06:23.499 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:23.499 C linker for the host machine: cc ld.bfd 2.40-14
00:06:23.499 Host machine cpu family: x86_64
00:06:23.499 Host machine cpu: x86_64
00:06:23.499 Message: host_machine.system: linux
00:06:23.499 Compiler for C supports arguments -Wno-missing-braces: YES
00:06:23.499 Compiler for C supports arguments -Wno-cast-function-type: YES
00:06:23.499 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:06:23.499 Run-time dependency threads found: YES
00:06:23.499 Has header "setupapi.h" : NO
00:06:23.499 Has header "linux/blkzoned.h" : YES
00:06:23.499 Has header "linux/blkzoned.h" : YES (cached)
00:06:23.499 Has header "libaio.h" : YES
00:06:23.499 Library aio found: YES
00:06:23.499 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:23.499 Run-time dependency liburing found: YES 2.2
00:06:23.499 Dependency libvfn skipped: feature with-libvfn disabled
00:06:23.499 Found CMake: /usr/bin/cmake (3.27.7)
00:06:23.499 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:06:23.499 Subproject spdk : skipped: feature with-spdk disabled
00:06:23.499 Run-time dependency appleframeworks found: NO (tried framework)
00:06:23.499 Run-time dependency appleframeworks found: NO (tried framework)
00:06:23.499 Library rt found: YES
00:06:23.499 Checking for function "clock_gettime" with dependency -lrt: YES
00:06:23.499 Configuring xnvme_config.h using configuration
00:06:23.499 Configuring xnvme.spec using configuration
00:06:23.499 Run-time dependency bash-completion found: YES 2.11
00:06:23.499 Message: Bash-completions: /usr/share/bash-completion/completions
00:06:23.499 Program cp found: YES (/usr/bin/cp)
00:06:23.499 Build targets in project: 3
00:06:23.499
00:06:23.499 xnvme 0.7.5
00:06:23.499
00:06:23.499 Subprojects
00:06:23.499 spdk : NO Feature 'with-spdk' disabled
00:06:23.499
00:06:23.499 User defined options
00:06:23.499 examples : false
00:06:23.499 tests : false
00:06:23.499 tools : false
00:06:23.499 with-libaio : enabled
00:06:23.499 with-liburing: enabled
00:06:23.499 with-libvfn : disabled
00:06:23.499 with-spdk : disabled
00:06:23.499
00:06:23.499 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:23.757 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:06:23.757 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:06:24.015 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:06:24.015 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:06:24.015 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:06:24.015 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:06:24.015 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:06:24.015 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:06:24.015 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:06:24.015 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:06:24.015 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:06:24.015 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:06:24.015 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:06:24.273 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:06:24.273 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:06:24.273 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:06:24.273 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:06:24.273 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:06:24.273 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:06:24.273 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:06:24.273 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:06:24.273 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:06:24.273 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:06:24.273 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:06:24.273 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:06:24.273 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:06:24.530 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:06:24.530 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:06:24.530 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:06:24.530 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:06:24.530 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:06:24.530 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:06:24.530 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:06:24.530 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:06:24.530 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:06:24.531 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:06:24.531 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:06:24.531 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:06:24.531 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:06:24.531 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:06:24.531 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:06:24.531 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:06:24.531 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:06:24.531 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:06:24.531 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:06:24.531 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:06:24.531 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:06:24.531 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:06:24.531 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:06:24.531 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:06:24.531 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:06:24.531 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:06:24.531 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:06:24.789 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:06:24.789 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:06:24.789 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:06:24.789 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:06:24.789 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:06:24.789 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:06:24.789 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:06:24.789 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:06:24.789 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:06:24.789 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:06:24.789 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:06:24.789 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:06:24.789 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:06:24.789 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:06:24.789 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:06:24.789 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:06:25.046 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:06:25.046 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:06:25.046 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:06:25.046 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:06:25.046 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:06:25.304 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:06:25.562 [75/76] Linking static target lib/libxnvme.a
00:06:25.562 [76/76] Linking target lib/libxnvme.so.0.7.5
00:06:25.562 INFO: autodetecting backend as ninja
00:06:25.562 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:25.562 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:06:35.538 The Meson build system
00:06:35.538 Version: 1.5.0
00:06:35.538 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:06:35.538 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:06:35.538 Build type: native build
00:06:35.538 Program cat found: YES (/usr/bin/cat)
00:06:35.538 Project name: DPDK
00:06:35.538 Project version: 24.03.0
00:06:35.538 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:35.538 C linker for the host machine: cc ld.bfd 2.40-14
00:06:35.538 Host machine cpu family: x86_64
00:06:35.538 Host machine cpu: x86_64
00:06:35.538 Message: ## Building in Developer Mode ##
00:06:35.538 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:35.538 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:06:35.538 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:35.538 Program python3 found: YES (/usr/bin/python3)
00:06:35.538 Program cat found: YES (/usr/bin/cat)
00:06:35.538 Compiler for C supports arguments -march=native: YES
00:06:35.538 Checking for size of "void *" : 8
00:06:35.538 Checking for size of "void *" : 8 (cached)
00:06:35.538 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:35.538 Library m found: YES
00:06:35.538 Library numa found: YES
00:06:35.538 Has header "numaif.h" : YES
00:06:35.538 Library fdt found: NO
00:06:35.538 Library execinfo found: NO
00:06:35.538 Has header "execinfo.h" : YES
00:06:35.538 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:35.538 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:35.538 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:35.538 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:35.538 Run-time dependency openssl found: YES 3.1.1
00:06:35.538 Run-time dependency libpcap found: YES 1.10.4
00:06:35.538 Has header "pcap.h" with dependency libpcap: YES
00:06:35.538 Compiler for C supports arguments -Wcast-qual: YES
00:06:35.538 Compiler for C supports arguments -Wdeprecated: YES
00:06:35.538 Compiler for C supports arguments -Wformat: YES
00:06:35.538 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:35.538 Compiler for C supports arguments -Wformat-security: NO
00:06:35.538 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:35.538 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:35.538 Compiler for C supports arguments -Wnested-externs: YES
00:06:35.538 Compiler for C supports arguments -Wold-style-definition: YES
00:06:35.538 Compiler for C supports arguments -Wpointer-arith: YES
00:06:35.538 Compiler for C supports arguments -Wsign-compare: YES
00:06:35.538 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:35.538 Compiler for C supports arguments -Wundef: YES
00:06:35.538 Compiler for C supports arguments -Wwrite-strings: YES
00:06:35.538 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:35.538 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:35.538 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:35.538 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:35.538 Program objdump found: YES (/usr/bin/objdump)
00:06:35.538 Compiler for C supports arguments -mavx512f: YES
00:06:35.538 Checking if "AVX512 checking" compiles: YES
00:06:35.538 Fetching value of define "__SSE4_2__" : 1
00:06:35.538 Fetching value of define "__AES__" : 1
00:06:35.538 Fetching value of define "__AVX__" : 1
00:06:35.538 Fetching value of define "__AVX2__" : 1
00:06:35.538 Fetching value of define "__AVX512BW__" : 1
00:06:35.538 Fetching value of define "__AVX512CD__" : 1
00:06:35.538 Fetching value of define "__AVX512DQ__" : 1
00:06:35.538 Fetching value of define "__AVX512F__" : 1
00:06:35.538 Fetching value of define "__AVX512VL__" : 1
00:06:35.538 Fetching value of define "__PCLMUL__" : 1
00:06:35.538 Fetching value of define "__RDRND__" : 1
00:06:35.538 Fetching value of define "__RDSEED__" : 1
00:06:35.538 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:06:35.538 Fetching value of define "__znver1__" : (undefined)
00:06:35.538 Fetching value of define "__znver2__" : (undefined)
00:06:35.538 Fetching value of define "__znver3__" : (undefined)
00:06:35.538 Fetching value of define "__znver4__" : (undefined)
00:06:35.538 Library asan found: YES
00:06:35.538 Compiler for C supports arguments -Wno-format-truncation: YES
00:06:35.538 Message: lib/log: Defining dependency "log"
00:06:35.538 Message: lib/kvargs: Defining dependency "kvargs"
00:06:35.538 Message: lib/telemetry: Defining dependency "telemetry"
00:06:35.538 Library rt found: YES
00:06:35.538 Checking for function "getentropy" : NO
00:06:35.538 Message: lib/eal: Defining dependency "eal"
00:06:35.538 Message: lib/ring: Defining dependency "ring"
00:06:35.538 Message: lib/rcu: Defining dependency "rcu"
00:06:35.538 Message: lib/mempool: Defining dependency "mempool"
00:06:35.538 Message: lib/mbuf: Defining dependency "mbuf"
00:06:35.538 Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:35.538 Fetching value of define "__AVX512F__" : 1 (cached)
00:06:35.538 Fetching value of define "__AVX512BW__" : 1 (cached)
00:06:35.538 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:06:35.538 Fetching value of define "__AVX512VL__" : 1 (cached)
00:06:35.538 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:06:35.538 Compiler for C supports arguments -mpclmul: YES
00:06:35.538 Compiler for C supports arguments -maes: YES
00:06:35.538 Compiler for C supports arguments -mavx512f: YES (cached)
00:06:35.538 Compiler for C supports arguments -mavx512bw: YES
00:06:35.538 Compiler for C supports arguments -mavx512dq: YES
00:06:35.538 Compiler for C supports arguments -mavx512vl: YES
00:06:35.538 Compiler for C supports arguments -mvpclmulqdq: YES
00:06:35.538 Compiler for C supports arguments -mavx2: YES
00:06:35.538 Compiler for C supports arguments -mavx: YES
00:06:35.538 Message: lib/net: Defining dependency "net"
00:06:35.538 Message: lib/meter: Defining dependency "meter"
00:06:35.538 Message: lib/ethdev: Defining dependency "ethdev"
00:06:35.538 Message: lib/pci: Defining dependency "pci"
00:06:35.538 Message: lib/cmdline: Defining dependency "cmdline"
00:06:35.538 Message: lib/hash: Defining dependency "hash"
00:06:35.538 Message: lib/timer: Defining dependency "timer"
00:06:35.538 Message: lib/compressdev: Defining dependency "compressdev"
00:06:35.538 Message: lib/cryptodev: Defining dependency "cryptodev"
00:06:35.538 Message: lib/dmadev: Defining dependency "dmadev"
00:06:35.538 Compiler for C supports arguments -Wno-cast-qual: YES
00:06:35.538 Message: lib/power: Defining dependency "power"
00:06:35.538 Message: lib/reorder: Defining dependency "reorder"
00:06:35.538 Message: lib/security: Defining dependency "security"
00:06:35.538 Has header "linux/userfaultfd.h" : YES
00:06:35.538 Has header "linux/vduse.h" : YES
00:06:35.538 Message: lib/vhost: Defining dependency "vhost"
00:06:35.538 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:06:35.538 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:06:35.538 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:06:35.538 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:06:35.538 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:06:35.538 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:06:35.538 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:06:35.538 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:06:35.538 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:06:35.538 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:06:35.539 Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:35.539 Configuring doxy-api-html.conf using configuration
00:06:35.539 Configuring doxy-api-man.conf using configuration
00:06:35.539 Program mandb found: YES (/usr/bin/mandb)
00:06:35.539 Program sphinx-build found: NO
00:06:35.539 Configuring rte_build_config.h using configuration
00:06:35.539 Message:
00:06:35.539 =================
00:06:35.539 Applications Enabled
00:06:35.539 =================
00:06:35.539
00:06:35.539 apps:
00:06:35.539
00:06:35.539
00:06:35.539 Message:
00:06:35.539 =================
00:06:35.539 Libraries Enabled
00:06:35.539 =================
00:06:35.539
00:06:35.539 libs:
00:06:35.539 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:06:35.539 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:06:35.539 cryptodev, dmadev, power, reorder, security, vhost,
00:06:35.539
00:06:35.539 Message:
00:06:35.539 ===============
00:06:35.539 Drivers Enabled
00:06:35.539 ===============
00:06:35.539
00:06:35.539 common:
00:06:35.539
00:06:35.539 bus:
00:06:35.539 pci, vdev,
00:06:35.539 mempool:
00:06:35.539 ring,
00:06:35.539 dma:
00:06:35.539
00:06:35.539 net:
00:06:35.539
00:06:35.539 crypto:
00:06:35.539
00:06:35.539 compress:
00:06:35.539
00:06:35.539 vdpa:
00:06:35.539
00:06:35.539
00:06:35.539 Message:
00:06:35.539 =================
00:06:35.539 Content Skipped
00:06:35.539 =================
00:06:35.539
00:06:35.539 apps:
00:06:35.539 dumpcap: explicitly disabled via build config
00:06:35.539 graph: explicitly disabled via build config
00:06:35.539 pdump: explicitly disabled via build config
00:06:35.539 proc-info: explicitly disabled via build config
00:06:35.539 test-acl: explicitly disabled via build config
00:06:35.539 test-bbdev: explicitly disabled via build config
00:06:35.539 test-cmdline: explicitly disabled via build config
00:06:35.539 test-compress-perf: explicitly disabled via build config
00:06:35.539 test-crypto-perf: explicitly disabled via build config
00:06:35.539 test-dma-perf: explicitly disabled via build config
00:06:35.539 test-eventdev: explicitly disabled via build config
00:06:35.539 test-fib: explicitly disabled via build config
00:06:35.539 test-flow-perf: explicitly disabled via build config
00:06:35.539 test-gpudev: explicitly disabled via build config
00:06:35.539 test-mldev: explicitly disabled via build config
00:06:35.539 test-pipeline: explicitly disabled via build config
00:06:35.539 test-pmd: explicitly disabled via build config
00:06:35.539 test-regex: explicitly disabled via build config
00:06:35.539 test-sad: explicitly disabled via build config
00:06:35.539 test-security-perf: explicitly disabled via build config
00:06:35.539
00:06:35.539 libs:
00:06:35.539 argparse: explicitly disabled via build config
00:06:35.539 metrics: explicitly disabled via build config
00:06:35.539 acl: explicitly disabled via build config
00:06:35.539 bbdev: explicitly disabled via build config
00:06:35.539 bitratestats: explicitly disabled via build config
00:06:35.539 bpf: explicitly disabled via build config
00:06:35.539 cfgfile: explicitly disabled via build config
00:06:35.539 distributor: explicitly disabled via build config
00:06:35.539 efd: explicitly disabled via build config
00:06:35.539 eventdev: explicitly disabled via build config
00:06:35.539 dispatcher: explicitly disabled via build config
00:06:35.539 gpudev: explicitly disabled via build config
00:06:35.539 gro: explicitly disabled via build config
00:06:35.539 gso: explicitly disabled via build config
00:06:35.539 ip_frag: explicitly disabled via build config
00:06:35.539 jobstats: explicitly disabled via build config
00:06:35.539 latencystats: explicitly disabled via build config
00:06:35.539 lpm: explicitly disabled via build config
00:06:35.539 member: explicitly disabled via build config
00:06:35.539 pcapng: explicitly disabled via build config
00:06:35.539 rawdev: explicitly disabled via build config
00:06:35.539 regexdev: explicitly disabled via build config
00:06:35.539 mldev: explicitly disabled via build config
00:06:35.539 rib: explicitly disabled via build config
00:06:35.539 sched: explicitly disabled via build config
00:06:35.539 stack: explicitly disabled via build config
00:06:35.539 ipsec: explicitly disabled via build config
00:06:35.539 pdcp: explicitly disabled via build config
00:06:35.539 fib: explicitly disabled via build config
00:06:35.539 port: explicitly disabled via build config
00:06:35.539 pdump: explicitly disabled via build config
00:06:35.539 table: explicitly disabled via build config
00:06:35.539 pipeline: explicitly disabled via build config
00:06:35.539 graph: explicitly disabled via build config
00:06:35.539 node: explicitly disabled via build config
00:06:35.539
00:06:35.539 drivers:
00:06:35.539 common/cpt: not in enabled drivers build config
00:06:35.539 common/dpaax: not in enabled drivers build config
00:06:35.539 common/iavf: not in enabled drivers build config
00:06:35.539 common/idpf: not in enabled drivers build config
00:06:35.539 common/ionic: not in enabled drivers build config
00:06:35.539 common/mvep: not in enabled drivers build config
00:06:35.539 common/octeontx: not in enabled drivers build config
00:06:35.539 bus/auxiliary: not in enabled drivers build config
00:06:35.539 bus/cdx: not in enabled drivers build config
00:06:35.539 bus/dpaa: not in enabled drivers build config
00:06:35.539 bus/fslmc: not in enabled drivers build config
00:06:35.539 bus/ifpga: not in enabled drivers build config
00:06:35.539 bus/platform: not in enabled drivers build config
00:06:35.539 bus/uacce: not in enabled drivers build config
00:06:35.539 bus/vmbus: not in enabled drivers build config
00:06:35.539 common/cnxk: not in enabled drivers build config
00:06:35.539 common/mlx5: not in enabled drivers build config
00:06:35.539 common/nfp: not in enabled drivers build config
00:06:35.539 common/nitrox: not in enabled drivers build config
00:06:35.539 common/qat: not in enabled drivers build config
00:06:35.539 common/sfc_efx: not in enabled drivers build config
00:06:35.539 mempool/bucket: not in enabled drivers build config
00:06:35.539 mempool/cnxk: not in enabled drivers build config
00:06:35.539 mempool/dpaa: not in enabled drivers build config
00:06:35.539 mempool/dpaa2: not in enabled drivers build config
00:06:35.539 mempool/octeontx: not in enabled drivers build config
00:06:35.539 mempool/stack: not in enabled drivers build config
00:06:35.539 dma/cnxk: not in enabled drivers build config
00:06:35.539 dma/dpaa: not in enabled drivers build config
00:06:35.539 dma/dpaa2: not in enabled drivers build config
00:06:35.539 dma/hisilicon: not in enabled drivers build config
00:06:35.539 dma/idxd: not in enabled drivers build config
00:06:35.539 dma/ioat: not in enabled drivers build config
00:06:35.539 dma/skeleton: not in enabled drivers build config
00:06:35.539 net/af_packet: not in enabled drivers build config
00:06:35.539 net/af_xdp: not in enabled drivers build config
00:06:35.539 net/ark: not in enabled drivers build config
00:06:35.539 net/atlantic: not in enabled drivers build config
00:06:35.539 net/avp: not in enabled drivers build config
00:06:35.539 net/axgbe: not in enabled drivers build config
00:06:35.539 net/bnx2x: not in enabled drivers build config
00:06:35.539 net/bnxt: not in enabled drivers build config
00:06:35.539 net/bonding: not in enabled drivers build config
00:06:35.539 net/cnxk: not in enabled drivers build config
00:06:35.539 net/cpfl: not in enabled drivers build config
00:06:35.539 net/cxgbe: not in enabled drivers build config
00:06:35.539 net/dpaa: not in enabled drivers build config
00:06:35.539 net/dpaa2: not in enabled drivers build config
00:06:35.539 net/e1000: not in enabled drivers build config
00:06:35.539 net/ena: not in enabled drivers build config
00:06:35.539 net/enetc: not in enabled drivers build config
00:06:35.539 net/enetfec: not in enabled drivers build config
00:06:35.539 net/enic: not in enabled drivers build config
00:06:35.539 net/failsafe: not in enabled drivers build config
00:06:35.539 net/fm10k: not in enabled drivers build config
00:06:35.539 net/gve: not in enabled drivers build config
00:06:35.539 net/hinic: not in enabled drivers build config
00:06:35.539 net/hns3: not in enabled drivers build config
00:06:35.539 net/i40e: not in enabled drivers build config
00:06:35.539 net/iavf: not in enabled drivers build config
00:06:35.539 net/ice: not in enabled drivers build config
00:06:35.539 net/idpf: not in enabled drivers build config
00:06:35.539 net/igc: not in enabled drivers build config
00:06:35.539 net/ionic: not in enabled drivers build config
00:06:35.539 net/ipn3ke: not in enabled drivers build config
00:06:35.539 net/ixgbe: not in enabled drivers build config
00:06:35.539 net/mana: not in enabled drivers build config
00:06:35.539 net/memif: not in enabled drivers build config
00:06:35.539 net/mlx4: not in enabled drivers build config
00:06:35.539 net/mlx5: not in enabled drivers build config
00:06:35.539 net/mvneta: not in enabled drivers build config
00:06:35.539 net/mvpp2: not in enabled drivers build config
00:06:35.539 net/netvsc: not in enabled drivers build config
00:06:35.539 net/nfb: not in enabled drivers build config
00:06:35.539 net/nfp: not in enabled drivers build config
00:06:35.539 net/ngbe: not in enabled drivers build config
00:06:35.539 net/null: not in enabled drivers build config
00:06:35.539 net/octeontx: not in enabled drivers build config
00:06:35.539 net/octeon_ep: not in enabled drivers build config
00:06:35.539 net/pcap: not in enabled drivers build config
00:06:35.539 net/pfe: not in enabled drivers build config
00:06:35.539 net/qede: not in enabled drivers build config
00:06:35.539 net/ring: not in enabled drivers build config
00:06:35.539 net/sfc: not in enabled drivers build config
00:06:35.539 net/softnic: not in enabled drivers build config
00:06:35.539 net/tap: not in enabled drivers build config
00:06:35.539 net/thunderx: not in enabled drivers build config
00:06:35.539 net/txgbe: not in enabled drivers build config
00:06:35.539 net/vdev_netvsc: not in enabled drivers build config
00:06:35.539 net/vhost: not in enabled drivers build config
00:06:35.539 net/virtio: not in enabled drivers build config
00:06:35.539 net/vmxnet3: not in enabled drivers build config
00:06:35.539 raw/*: missing internal dependency, "rawdev"
00:06:35.539 crypto/armv8: not in enabled drivers build config
00:06:35.539 crypto/bcmfs: not in enabled drivers build config
00:06:35.539 crypto/caam_jr: not in enabled drivers build config
00:06:35.539 crypto/ccp: not in enabled drivers build config
00:06:35.539 crypto/cnxk: not in enabled drivers build config
00:06:35.539 crypto/dpaa_sec: not in enabled drivers build config
00:06:35.539 crypto/dpaa2_sec: not in enabled drivers build config
00:06:35.539 crypto/ipsec_mb: not in enabled drivers build config
00:06:35.539 crypto/mlx5: not in enabled drivers build config
00:06:35.539 crypto/mvsam: not in enabled drivers build config
00:06:35.540 crypto/nitrox: not in enabled drivers build config 00:06:35.540 crypto/null: not in enabled drivers build config 00:06:35.540 crypto/octeontx: not in enabled drivers build config 00:06:35.540 crypto/openssl: not in enabled drivers build config 00:06:35.540 crypto/scheduler: not in enabled drivers build config 00:06:35.540 crypto/uadk: not in enabled drivers build config 00:06:35.540 crypto/virtio: not in enabled drivers build config 00:06:35.540 compress/isal: not in enabled drivers build config 00:06:35.540 compress/mlx5: not in enabled drivers build config 00:06:35.540 compress/nitrox: not in enabled drivers build config 00:06:35.540 compress/octeontx: not in enabled drivers build config 00:06:35.540 compress/zlib: not in enabled drivers build config 00:06:35.540 regex/*: missing internal dependency, "regexdev" 00:06:35.540 ml/*: missing internal dependency, "mldev" 00:06:35.540 vdpa/ifc: not in enabled drivers build config 00:06:35.540 vdpa/mlx5: not in enabled drivers build config 00:06:35.540 vdpa/nfp: not in enabled drivers build config 00:06:35.540 vdpa/sfc: not in enabled drivers build config 00:06:35.540 event/*: missing internal dependency, "eventdev" 00:06:35.540 baseband/*: missing internal dependency, "bbdev" 00:06:35.540 gpu/*: missing internal dependency, "gpudev" 00:06:35.540 00:06:35.540 00:06:35.540 Build targets in project: 85 00:06:35.540 00:06:35.540 DPDK 24.03.0 00:06:35.540 00:06:35.540 User defined options 00:06:35.540 buildtype : debug 00:06:35.540 default_library : shared 00:06:35.540 libdir : lib 00:06:35.540 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:35.540 b_sanitize : address 00:06:35.540 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:35.540 c_link_args : 00:06:35.540 cpu_instruction_set: native 00:06:35.540 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:35.540 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:35.540 enable_docs : false 00:06:35.540 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:35.540 enable_kmods : false 00:06:35.540 max_lcores : 128 00:06:35.540 tests : false 00:06:35.540 00:06:35.540 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:36.106 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:36.106 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:36.106 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:36.106 [3/268] Linking static target lib/librte_kvargs.a 00:06:36.364 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:36.364 [5/268] Linking static target lib/librte_log.a 00:06:36.364 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:36.621 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:36.621 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:36.878 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:36.878 [10/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:36.878 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:36.878 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:36.878 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.878 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:37.135 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:37.135 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:37.135 [17/268] Linking static target lib/librte_telemetry.a 00:06:37.135 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:37.393 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:37.393 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:37.393 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:37.393 [22/268] Linking target lib/librte_log.so.24.1 00:06:37.393 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:37.651 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:37.651 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:37.651 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:37.651 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:37.651 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:37.651 [29/268] Linking target lib/librte_kvargs.so.24.1 00:06:37.942 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:37.942 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:37.942 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:37.942 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:37.942 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:38.200 [35/268] Linking target lib/librte_telemetry.so.24.1 00:06:38.200 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:38.201 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:38.201 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:38.458 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:38.458 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:38.458 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:38.458 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:38.458 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:38.458 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:38.458 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:38.716 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:38.974 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:38.974 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:38.974 
[49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:38.974 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:38.974 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:38.974 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:38.974 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:39.233 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:39.491 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:39.491 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:39.491 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:39.748 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:39.748 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:39.748 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:39.748 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:39.748 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:39.748 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:40.007 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:40.007 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:40.265 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:40.524 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:40.524 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:40.524 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:40.524 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:40.524 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:40.782 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:40.782 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:40.782 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:40.782 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:40.782 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:40.782 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:40.782 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:41.040 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:41.040 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:41.040 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:41.040 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:41.298 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:41.298 [84/268] Linking static target lib/librte_ring.a 00:06:41.298 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:41.556 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:41.556 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:41.556 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:41.556 [89/268] Linking static target lib/librte_rcu.a 00:06:41.556 [90/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:41.556 [91/268] Linking static target lib/librte_eal.a 00:06:41.814 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:41.814 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:41.814 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:41.814 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.814 [96/268] Linking static target lib/librte_mempool.a 00:06:42.073 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:42.073 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:42.073 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:42.073 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.331 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:42.331 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:42.589 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:42.589 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:42.589 [105/268] Linking static target lib/librte_mbuf.a 00:06:42.589 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:42.589 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:42.589 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:42.847 [109/268] Linking static target lib/librte_net.a 00:06:42.847 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:42.847 [111/268] Linking static target lib/librte_meter.a 00:06:43.105 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:43.105 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:43.105 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:43.105 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:43.361 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.361 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.361 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.619 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:43.619 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.876 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:44.135 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:44.135 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:44.135 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:44.393 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:44.393 [126/268] Linking static target lib/librte_pci.a 00:06:44.393 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:44.393 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:44.393 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:44.393 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:44.393 [131/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:44.393 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:44.651 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:44.651 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:44.651 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:44.651 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:44.651 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:44.909 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:44.909 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:44.909 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:44.909 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:44.909 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:44.909 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:44.909 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:44.909 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:45.168 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:45.168 [147/268] Linking static target lib/librte_cmdline.a 00:06:45.168 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:45.168 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:45.427 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:45.427 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:45.685 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:45.685 [153/268] Linking static target lib/librte_timer.a 00:06:45.942 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:45.942 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:45.942 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:45.942 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:46.199 [158/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:46.199 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:46.199 [160/268] Linking static target lib/librte_compressdev.a 00:06:46.199 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:46.457 [162/268] Linking static target lib/librte_ethdev.a 00:06:46.457 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:46.457 [164/268] Linking static target lib/librte_hash.a 00:06:46.457 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:46.457 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:46.714 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:46.714 [168/268] Linking static target lib/librte_dmadev.a 00:06:46.714 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:46.714 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:46.714 [171/268] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:47.279 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:47.279 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:47.279 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:47.538 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:47.815 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:47.815 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:47.815 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:47.815 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:48.073 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:48.073 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:48.073 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:48.638 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:48.638 [184/268] Linking static target lib/librte_cryptodev.a 00:06:48.638 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:48.638 [186/268] Linking static target lib/librte_reorder.a 00:06:48.638 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:48.896 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:48.896 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:48.896 [190/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:48.896 [191/268] Linking static target lib/librte_power.a 00:06:49.153 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:49.153 [193/268] Linking static target lib/librte_security.a 00:06:49.411 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:49.976 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:50.233 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:50.233 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:50.233 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:50.491 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:50.491 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:51.083 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:51.083 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:51.083 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:51.083 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:51.341 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:51.341 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:51.599 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:51.599 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:51.599 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:51.599 [210/268] Linking static 
target drivers/libtmp_rte_bus_pci.a 00:06:51.599 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.858 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:51.858 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:51.858 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:51.858 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:51.858 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:51.859 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:51.859 [218/268] Linking static target drivers/librte_bus_vdev.a 00:06:51.859 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:51.859 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:52.117 [221/268] Linking static target drivers/librte_bus_pci.a 00:06:52.117 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:52.117 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:52.117 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:52.117 [225/268] Linking static target drivers/librte_mempool_ring.a 00:06:52.375 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:52.632 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:52.890 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:54.876 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.134 [230/268] Linking target lib/librte_eal.so.24.1 00:06:55.134 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:55.392 [232/268] Linking target lib/librte_meter.so.24.1 00:06:55.392 [233/268] Linking target lib/librte_ring.so.24.1 00:06:55.392 [234/268] Linking target lib/librte_timer.so.24.1 00:06:55.392 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:55.392 [236/268] Linking target lib/librte_dmadev.so.24.1 00:06:55.392 [237/268] Linking target lib/librte_pci.so.24.1 00:06:55.392 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:55.392 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:55.392 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:55.392 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:55.392 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:55.649 [243/268] Linking target lib/librte_rcu.so.24.1 00:06:55.649 [244/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.649 [245/268] Linking target lib/librte_mempool.so.24.1 00:06:55.649 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:55.649 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:55.649 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:55.907 [249/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:06:55.907 [250/268] Linking target lib/librte_mbuf.so.24.1 00:06:55.907 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:56.165 [252/268] Linking target lib/librte_net.so.24.1 00:06:56.165 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:06:56.165 [254/268] Linking target lib/librte_reorder.so.24.1 00:06:56.165 [255/268] Linking target lib/librte_compressdev.so.24.1 00:06:56.165 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:56.165 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:56.165 [258/268] Linking target lib/librte_cmdline.so.24.1 00:06:56.165 [259/268] Linking target lib/librte_hash.so.24.1 00:06:56.165 [260/268] Linking target lib/librte_security.so.24.1 00:06:56.422 [261/268] Linking target lib/librte_ethdev.so.24.1 00:06:56.422 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:56.422 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:56.680 [264/268] Linking target lib/librte_power.so.24.1 00:06:57.614 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:57.872 [266/268] Linking static target lib/librte_vhost.a 00:06:59.780 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:59.780 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:59.780 INFO: autodetecting backend as ninja 00:06:59.780 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:17.859 CC lib/ut_mock/mock.o 00:07:17.859 CC lib/log/log.o 00:07:17.859 CC lib/log/log_deprecated.o 00:07:17.859 CC lib/log/log_flags.o 00:07:17.859 CC lib/ut/ut.o 00:07:18.117 LIB libspdk_ut_mock.a 00:07:18.117 SO libspdk_ut_mock.so.6.0 00:07:18.117 LIB libspdk_log.a 00:07:18.117 LIB libspdk_ut.a 00:07:18.117 SO libspdk_log.so.7.1 00:07:18.117 SYMLINK libspdk_ut_mock.so 00:07:18.117 SO libspdk_ut.so.2.0 00:07:18.117 SYMLINK libspdk_log.so 00:07:18.117 SYMLINK libspdk_ut.so 00:07:18.375 CXX lib/trace_parser/trace.o 00:07:18.375 CC lib/dma/dma.o 00:07:18.375 CC lib/util/bit_array.o 00:07:18.375 CC lib/util/crc16.o 00:07:18.375 CC lib/util/cpuset.o 00:07:18.375 CC lib/ioat/ioat.o 00:07:18.375 CC lib/util/crc32.o 00:07:18.375 CC lib/util/crc32c.o 00:07:18.375 CC lib/util/base64.o 00:07:18.634 CC lib/vfio_user/host/vfio_user_pci.o 00:07:18.634 CC lib/util/crc32_ieee.o 00:07:18.634 CC lib/util/crc64.o 00:07:18.634 CC lib/util/dif.o 00:07:18.634 CC lib/util/fd.o 00:07:18.634 CC lib/util/fd_group.o 00:07:18.634 LIB libspdk_dma.a 00:07:18.634 CC lib/util/file.o 00:07:18.634 CC lib/util/hexlify.o 00:07:18.634 SO libspdk_dma.so.5.0 00:07:18.892 CC lib/vfio_user/host/vfio_user.o 00:07:18.892 SYMLINK libspdk_dma.so 00:07:18.892 CC lib/util/iov.o 00:07:18.892 LIB libspdk_ioat.a 00:07:18.892 SO libspdk_ioat.so.7.0 00:07:18.892 CC lib/util/math.o 00:07:18.892 CC lib/util/net.o 00:07:18.892 SYMLINK libspdk_ioat.so 00:07:18.892 CC lib/util/pipe.o 00:07:18.892 CC lib/util/strerror_tls.o 00:07:18.892 CC lib/util/string.o 00:07:19.150 CC lib/util/uuid.o 00:07:19.150 LIB libspdk_vfio_user.a 00:07:19.150 CC lib/util/xor.o 00:07:19.150 CC lib/util/zipf.o 00:07:19.150 CC lib/util/md5.o 00:07:19.150 SO libspdk_vfio_user.so.5.0 00:07:19.150 SYMLINK libspdk_vfio_user.so 00:07:19.408 LIB libspdk_util.a 00:07:19.666 SO libspdk_util.so.10.1 00:07:19.666 
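For anyone reproducing the DPDK submodule stage that finished at [268/268] above, the "User defined options" summary logged earlier corresponds roughly to the meson/ninja invocation below. This is a sketch reconstructed from the logged options, not the CI's literal command (SPDK's configure wrapper assembles the real one); the paths follow the prefix shown in the log.
# Sketch reconstructed from the "User defined options" block above;
# the real command is generated by SPDK's configure wrapper.
cd /home/vagrant/spdk_repo/spdk/dpdk
meson setup build-tmp \
    --buildtype=debug \
    --default-library=shared \
    --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmax_lcores=128 \
    -Dtests=false
ninja -C build-tmp -j 10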
LIB libspdk_trace_parser.a 00:07:19.924 SYMLINK libspdk_util.so 00:07:19.924 SO libspdk_trace_parser.so.6.0 00:07:19.924 SYMLINK libspdk_trace_parser.so 00:07:19.924 CC lib/json/json_parse.o 00:07:19.924 CC lib/json/json_util.o 00:07:19.924 CC lib/json/json_write.o 00:07:19.924 CC lib/rdma_utils/rdma_utils.o 00:07:19.924 CC lib/env_dpdk/env.o 00:07:19.924 CC lib/vmd/led.o 00:07:19.924 CC lib/vmd/vmd.o 00:07:19.924 CC lib/conf/conf.o 00:07:19.924 CC lib/env_dpdk/memory.o 00:07:19.924 CC lib/idxd/idxd.o 00:07:20.181 CC lib/env_dpdk/pci.o 00:07:20.181 LIB libspdk_conf.a 00:07:20.181 CC lib/idxd/idxd_user.o 00:07:20.181 CC lib/idxd/idxd_kernel.o 00:07:20.181 SO libspdk_conf.so.6.0 00:07:20.439 LIB libspdk_rdma_utils.a 00:07:20.439 LIB libspdk_json.a 00:07:20.439 SO libspdk_rdma_utils.so.1.0 00:07:20.439 SO libspdk_json.so.6.0 00:07:20.439 SYMLINK libspdk_conf.so 00:07:20.439 CC lib/env_dpdk/init.o 00:07:20.439 SYMLINK libspdk_rdma_utils.so 00:07:20.439 SYMLINK libspdk_json.so 00:07:20.439 CC lib/env_dpdk/threads.o 00:07:20.439 CC lib/env_dpdk/pci_ioat.o 00:07:20.439 CC lib/env_dpdk/pci_virtio.o 00:07:20.697 CC lib/env_dpdk/pci_vmd.o 00:07:20.697 CC lib/env_dpdk/pci_idxd.o 00:07:20.697 CC lib/rdma_provider/common.o 00:07:20.697 CC lib/env_dpdk/pci_event.o 00:07:20.697 CC lib/env_dpdk/sigbus_handler.o 00:07:20.697 CC lib/env_dpdk/pci_dpdk.o 00:07:20.697 LIB libspdk_idxd.a 00:07:20.697 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:20.973 LIB libspdk_vmd.a 00:07:20.973 SO libspdk_idxd.so.12.1 00:07:20.973 SO libspdk_vmd.so.6.0 00:07:20.973 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:20.973 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:20.973 SYMLINK libspdk_idxd.so 00:07:20.973 SYMLINK libspdk_vmd.so 00:07:20.973 CC lib/jsonrpc/jsonrpc_server.o 00:07:20.973 CC lib/jsonrpc/jsonrpc_client.o 00:07:20.973 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:20.973 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:20.973 LIB libspdk_rdma_provider.a 00:07:21.232 SO libspdk_rdma_provider.so.7.0 00:07:21.232 SYMLINK libspdk_rdma_provider.so 00:07:21.232 LIB libspdk_jsonrpc.a 00:07:21.489 SO libspdk_jsonrpc.so.6.0 00:07:21.489 SYMLINK libspdk_jsonrpc.so 00:07:21.747 CC lib/rpc/rpc.o 00:07:22.005 LIB libspdk_rpc.a 00:07:22.005 SO libspdk_rpc.so.6.0 00:07:22.005 SYMLINK libspdk_rpc.so 00:07:22.263 LIB libspdk_env_dpdk.a 00:07:22.263 SO libspdk_env_dpdk.so.15.1 00:07:22.263 CC lib/keyring/keyring_rpc.o 00:07:22.263 CC lib/keyring/keyring.o 00:07:22.263 CC lib/trace/trace.o 00:07:22.263 CC lib/trace/trace_rpc.o 00:07:22.263 CC lib/trace/trace_flags.o 00:07:22.263 CC lib/notify/notify.o 00:07:22.521 CC lib/notify/notify_rpc.o 00:07:22.521 SYMLINK libspdk_env_dpdk.so 00:07:22.521 LIB libspdk_notify.a 00:07:22.521 SO libspdk_notify.so.6.0 00:07:22.779 LIB libspdk_keyring.a 00:07:22.779 SO libspdk_keyring.so.2.0 00:07:22.779 SYMLINK libspdk_notify.so 00:07:22.779 LIB libspdk_trace.a 00:07:22.779 SYMLINK libspdk_keyring.so 00:07:22.779 SO libspdk_trace.so.11.0 00:07:22.779 SYMLINK libspdk_trace.so 00:07:23.344 CC lib/thread/thread.o 00:07:23.344 CC lib/thread/iobuf.o 00:07:23.344 CC lib/sock/sock.o 00:07:23.344 CC lib/sock/sock_rpc.o 00:07:23.909 LIB libspdk_sock.a 00:07:23.909 SO libspdk_sock.so.10.0 00:07:23.909 SYMLINK libspdk_sock.so 00:07:24.167 CC lib/nvme/nvme_ctrlr.o 00:07:24.167 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:24.167 CC lib/nvme/nvme_fabric.o 00:07:24.167 CC lib/nvme/nvme_ns.o 00:07:24.167 CC lib/nvme/nvme_ns_cmd.o 00:07:24.167 CC lib/nvme/nvme_qpair.o 00:07:24.167 CC lib/nvme/nvme_pcie_common.o 00:07:24.167 CC lib/nvme/nvme.o 
00:07:24.167 CC lib/nvme/nvme_pcie.o 00:07:25.141 CC lib/nvme/nvme_quirks.o 00:07:25.141 CC lib/nvme/nvme_transport.o 00:07:25.141 CC lib/nvme/nvme_discovery.o 00:07:25.398 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:25.398 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:25.398 LIB libspdk_thread.a 00:07:25.398 CC lib/nvme/nvme_tcp.o 00:07:25.398 CC lib/nvme/nvme_opal.o 00:07:25.398 SO libspdk_thread.so.11.0 00:07:25.656 SYMLINK libspdk_thread.so 00:07:25.656 CC lib/nvme/nvme_io_msg.o 00:07:25.656 CC lib/nvme/nvme_poll_group.o 00:07:25.914 CC lib/accel/accel.o 00:07:25.914 CC lib/nvme/nvme_zns.o 00:07:25.914 CC lib/blob/blobstore.o 00:07:26.171 CC lib/blob/request.o 00:07:26.171 CC lib/blob/zeroes.o 00:07:26.171 CC lib/init/json_config.o 00:07:26.171 CC lib/init/subsystem.o 00:07:26.429 CC lib/init/subsystem_rpc.o 00:07:26.429 CC lib/accel/accel_rpc.o 00:07:26.429 CC lib/blob/blob_bs_dev.o 00:07:26.429 CC lib/init/rpc.o 00:07:26.429 CC lib/nvme/nvme_stubs.o 00:07:26.429 CC lib/nvme/nvme_auth.o 00:07:26.429 CC lib/nvme/nvme_cuse.o 00:07:26.686 CC lib/accel/accel_sw.o 00:07:26.686 LIB libspdk_init.a 00:07:26.686 SO libspdk_init.so.6.0 00:07:26.686 SYMLINK libspdk_init.so 00:07:26.944 CC lib/virtio/virtio.o 00:07:26.944 CC lib/virtio/virtio_vhost_user.o 00:07:26.944 CC lib/fsdev/fsdev.o 00:07:27.202 LIB libspdk_accel.a 00:07:27.202 CC lib/event/app.o 00:07:27.202 SO libspdk_accel.so.16.0 00:07:27.202 CC lib/nvme/nvme_rdma.o 00:07:27.461 CC lib/fsdev/fsdev_io.o 00:07:27.461 CC lib/event/reactor.o 00:07:27.461 SYMLINK libspdk_accel.so 00:07:27.461 CC lib/event/log_rpc.o 00:07:27.461 CC lib/virtio/virtio_vfio_user.o 00:07:27.461 CC lib/virtio/virtio_pci.o 00:07:27.720 CC lib/event/app_rpc.o 00:07:27.720 CC lib/event/scheduler_static.o 00:07:27.720 CC lib/bdev/bdev.o 00:07:27.978 CC lib/bdev/bdev_rpc.o 00:07:27.978 CC lib/fsdev/fsdev_rpc.o 00:07:27.978 CC lib/bdev/bdev_zone.o 00:07:27.978 CC lib/bdev/part.o 00:07:27.978 LIB libspdk_virtio.a 00:07:27.978 CC lib/bdev/scsi_nvme.o 00:07:27.978 LIB libspdk_event.a 00:07:27.978 SO libspdk_virtio.so.7.0 00:07:27.978 SO libspdk_event.so.14.0 00:07:27.978 LIB libspdk_fsdev.a 00:07:27.978 SO libspdk_fsdev.so.2.0 00:07:27.978 SYMLINK libspdk_virtio.so 00:07:27.978 SYMLINK libspdk_event.so 00:07:28.237 SYMLINK libspdk_fsdev.so 00:07:28.494 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:29.060 LIB libspdk_nvme.a 00:07:29.319 LIB libspdk_fuse_dispatcher.a 00:07:29.319 SO libspdk_fuse_dispatcher.so.1.0 00:07:29.319 SO libspdk_nvme.so.15.0 00:07:29.319 SYMLINK libspdk_fuse_dispatcher.so 00:07:29.885 SYMLINK libspdk_nvme.so 00:07:30.820 LIB libspdk_blob.a 00:07:30.820 SO libspdk_blob.so.11.0 00:07:30.820 SYMLINK libspdk_blob.so 00:07:31.078 CC lib/blobfs/blobfs.o 00:07:31.078 CC lib/blobfs/tree.o 00:07:31.078 CC lib/lvol/lvol.o 00:07:31.646 LIB libspdk_bdev.a 00:07:31.907 SO libspdk_bdev.so.17.0 00:07:31.907 SYMLINK libspdk_bdev.so 00:07:32.168 CC lib/ftl/ftl_core.o 00:07:32.168 CC lib/ftl/ftl_debug.o 00:07:32.168 CC lib/ftl/ftl_layout.o 00:07:32.168 CC lib/ftl/ftl_init.o 00:07:32.168 CC lib/nbd/nbd.o 00:07:32.168 CC lib/nvmf/ctrlr.o 00:07:32.168 CC lib/scsi/dev.o 00:07:32.168 CC lib/ublk/ublk.o 00:07:32.426 LIB libspdk_blobfs.a 00:07:32.426 SO libspdk_blobfs.so.10.0 00:07:32.426 SYMLINK libspdk_blobfs.so 00:07:32.426 LIB libspdk_lvol.a 00:07:32.426 CC lib/ublk/ublk_rpc.o 00:07:32.426 CC lib/scsi/lun.o 00:07:32.426 CC lib/scsi/port.o 00:07:32.426 SO libspdk_lvol.so.10.0 00:07:32.685 CC lib/nbd/nbd_rpc.o 00:07:32.685 CC lib/ftl/ftl_io.o 00:07:32.685 SYMLINK libspdk_lvol.so 
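From "INFO: autodetecting backend as ninja" onward, the CC/LIB/SO/SYMLINK lines are SPDK's own make output rather than DPDK's. A local build producing output of this shape would be configured roughly as sketched below; this is a hedged reconstruction, since only the debug build type, the address sanitizer, and the xnvme bdev module (compiled further below) are directly visible in this log, and the CI job derives its full flag set from its test configuration.
# Sketch only: flags inferred from what this log shows (buildtype=debug
# and b_sanitize=address in the DPDK summary, bdev_xnvme compiled below);
# the job's real flag set comes from its test configuration.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-asan --with-xnvme
make -j10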
00:07:32.685 CC lib/scsi/scsi.o 00:07:32.685 CC lib/nvmf/ctrlr_discovery.o 00:07:32.685 CC lib/nvmf/ctrlr_bdev.o 00:07:32.685 CC lib/ftl/ftl_sb.o 00:07:32.685 CC lib/scsi/scsi_bdev.o 00:07:32.685 CC lib/scsi/scsi_pr.o 00:07:32.944 LIB libspdk_nbd.a 00:07:32.944 CC lib/nvmf/subsystem.o 00:07:32.944 SO libspdk_nbd.so.7.0 00:07:32.944 CC lib/nvmf/nvmf.o 00:07:32.944 SYMLINK libspdk_nbd.so 00:07:32.944 CC lib/ftl/ftl_l2p.o 00:07:32.944 CC lib/ftl/ftl_l2p_flat.o 00:07:33.202 LIB libspdk_ublk.a 00:07:33.202 SO libspdk_ublk.so.3.0 00:07:33.202 CC lib/nvmf/nvmf_rpc.o 00:07:33.202 CC lib/nvmf/transport.o 00:07:33.202 SYMLINK libspdk_ublk.so 00:07:33.202 CC lib/nvmf/tcp.o 00:07:33.202 CC lib/ftl/ftl_nv_cache.o 00:07:33.461 CC lib/nvmf/stubs.o 00:07:33.461 CC lib/scsi/scsi_rpc.o 00:07:33.719 CC lib/scsi/task.o 00:07:33.719 CC lib/nvmf/mdns_server.o 00:07:33.978 CC lib/nvmf/rdma.o 00:07:33.978 LIB libspdk_scsi.a 00:07:33.978 SO libspdk_scsi.so.9.0 00:07:33.978 CC lib/nvmf/auth.o 00:07:34.236 CC lib/ftl/ftl_band.o 00:07:34.236 SYMLINK libspdk_scsi.so 00:07:34.236 CC lib/ftl/ftl_band_ops.o 00:07:34.236 CC lib/ftl/ftl_writer.o 00:07:34.236 CC lib/ftl/ftl_rq.o 00:07:34.494 CC lib/ftl/ftl_reloc.o 00:07:34.494 CC lib/ftl/ftl_l2p_cache.o 00:07:34.494 CC lib/ftl/ftl_p2l.o 00:07:34.495 CC lib/ftl/ftl_p2l_log.o 00:07:34.753 CC lib/ftl/mngt/ftl_mngt.o 00:07:34.753 CC lib/iscsi/conn.o 00:07:34.753 CC lib/vhost/vhost.o 00:07:35.010 CC lib/vhost/vhost_rpc.o 00:07:35.010 CC lib/vhost/vhost_scsi.o 00:07:35.010 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:35.011 CC lib/iscsi/init_grp.o 00:07:35.304 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:35.304 CC lib/iscsi/iscsi.o 00:07:35.304 CC lib/vhost/vhost_blk.o 00:07:35.304 CC lib/iscsi/param.o 00:07:35.304 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:35.304 CC lib/iscsi/portal_grp.o 00:07:35.563 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:35.563 CC lib/iscsi/tgt_node.o 00:07:35.563 CC lib/vhost/rte_vhost_user.o 00:07:35.563 CC lib/iscsi/iscsi_subsystem.o 00:07:35.563 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:35.821 CC lib/iscsi/iscsi_rpc.o 00:07:35.821 CC lib/iscsi/task.o 00:07:36.079 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:36.079 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:36.079 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:36.079 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:36.337 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:36.337 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:36.337 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:36.337 CC lib/ftl/utils/ftl_conf.o 00:07:36.337 CC lib/ftl/utils/ftl_md.o 00:07:36.337 CC lib/ftl/utils/ftl_mempool.o 00:07:36.596 CC lib/ftl/utils/ftl_bitmap.o 00:07:36.596 CC lib/ftl/utils/ftl_property.o 00:07:36.596 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:36.596 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:36.596 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:36.854 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:36.854 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:36.854 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:36.854 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:36.854 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:36.854 LIB libspdk_nvmf.a 00:07:36.854 LIB libspdk_vhost.a 00:07:37.113 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:37.113 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:37.113 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:37.113 SO libspdk_vhost.so.8.0 00:07:37.113 SO libspdk_nvmf.so.20.0 00:07:37.113 LIB libspdk_iscsi.a 00:07:37.113 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:37.113 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:37.113 SO libspdk_iscsi.so.8.0 00:07:37.113 CC lib/ftl/base/ftl_base_dev.o 00:07:37.113 SYMLINK 
libspdk_vhost.so 00:07:37.113 CC lib/ftl/base/ftl_base_bdev.o 00:07:37.371 CC lib/ftl/ftl_trace.o 00:07:37.371 SYMLINK libspdk_nvmf.so 00:07:37.371 SYMLINK libspdk_iscsi.so 00:07:37.631 LIB libspdk_ftl.a 00:07:37.889 SO libspdk_ftl.so.9.0 00:07:38.149 SYMLINK libspdk_ftl.so 00:07:38.407 CC module/env_dpdk/env_dpdk_rpc.o 00:07:38.741 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:38.741 CC module/fsdev/aio/fsdev_aio.o 00:07:38.741 CC module/accel/ioat/accel_ioat.o 00:07:38.741 CC module/keyring/file/keyring.o 00:07:38.741 CC module/sock/posix/posix.o 00:07:38.741 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:38.741 CC module/blob/bdev/blob_bdev.o 00:07:38.741 CC module/scheduler/gscheduler/gscheduler.o 00:07:38.741 CC module/accel/error/accel_error.o 00:07:38.741 LIB libspdk_env_dpdk_rpc.a 00:07:38.741 SO libspdk_env_dpdk_rpc.so.6.0 00:07:38.741 CC module/keyring/file/keyring_rpc.o 00:07:38.741 LIB libspdk_scheduler_gscheduler.a 00:07:38.741 LIB libspdk_scheduler_dpdk_governor.a 00:07:38.741 SYMLINK libspdk_env_dpdk_rpc.so 00:07:38.741 CC module/accel/error/accel_error_rpc.o 00:07:38.741 SO libspdk_scheduler_gscheduler.so.4.0 00:07:38.741 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:38.741 LIB libspdk_scheduler_dynamic.a 00:07:38.741 CC module/accel/ioat/accel_ioat_rpc.o 00:07:39.001 SO libspdk_scheduler_dynamic.so.4.0 00:07:39.001 SYMLINK libspdk_scheduler_gscheduler.so 00:07:39.001 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:39.001 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:39.001 SYMLINK libspdk_scheduler_dynamic.so 00:07:39.001 CC module/fsdev/aio/linux_aio_mgr.o 00:07:39.001 LIB libspdk_keyring_file.a 00:07:39.001 LIB libspdk_accel_ioat.a 00:07:39.001 LIB libspdk_accel_error.a 00:07:39.001 SO libspdk_keyring_file.so.2.0 00:07:39.001 SO libspdk_accel_ioat.so.6.0 00:07:39.001 LIB libspdk_blob_bdev.a 00:07:39.001 SO libspdk_accel_error.so.2.0 00:07:39.001 CC module/keyring/linux/keyring.o 00:07:39.001 SO libspdk_blob_bdev.so.11.0 00:07:39.001 SYMLINK libspdk_keyring_file.so 00:07:39.001 CC module/accel/dsa/accel_dsa.o 00:07:39.262 SYMLINK libspdk_accel_ioat.so 00:07:39.262 SYMLINK libspdk_accel_error.so 00:07:39.262 CC module/accel/dsa/accel_dsa_rpc.o 00:07:39.262 CC module/keyring/linux/keyring_rpc.o 00:07:39.262 SYMLINK libspdk_blob_bdev.so 00:07:39.262 CC module/accel/iaa/accel_iaa.o 00:07:39.262 LIB libspdk_keyring_linux.a 00:07:39.521 SO libspdk_keyring_linux.so.1.0 00:07:39.521 LIB libspdk_fsdev_aio.a 00:07:39.521 SO libspdk_fsdev_aio.so.1.0 00:07:39.521 LIB libspdk_accel_dsa.a 00:07:39.521 CC module/bdev/delay/vbdev_delay.o 00:07:39.521 CC module/bdev/gpt/gpt.o 00:07:39.521 SYMLINK libspdk_keyring_linux.so 00:07:39.521 CC module/blobfs/bdev/blobfs_bdev.o 00:07:39.521 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:39.521 CC module/bdev/lvol/vbdev_lvol.o 00:07:39.521 CC module/bdev/error/vbdev_error.o 00:07:39.521 SO libspdk_accel_dsa.so.5.0 00:07:39.521 SYMLINK libspdk_fsdev_aio.so 00:07:39.521 CC module/bdev/error/vbdev_error_rpc.o 00:07:39.521 LIB libspdk_sock_posix.a 00:07:39.521 SYMLINK libspdk_accel_dsa.so 00:07:39.521 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:39.521 SO libspdk_sock_posix.so.6.0 00:07:39.780 CC module/accel/iaa/accel_iaa_rpc.o 00:07:39.780 SYMLINK libspdk_sock_posix.so 00:07:39.780 CC module/bdev/gpt/vbdev_gpt.o 00:07:39.780 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:39.780 LIB libspdk_blobfs_bdev.a 00:07:39.780 SO libspdk_blobfs_bdev.so.6.0 00:07:39.780 LIB libspdk_accel_iaa.a 00:07:39.780 SO libspdk_accel_iaa.so.3.0 00:07:39.780 
SYMLINK libspdk_blobfs_bdev.so 00:07:39.780 LIB libspdk_bdev_error.a 00:07:40.038 SO libspdk_bdev_error.so.6.0 00:07:40.038 SYMLINK libspdk_accel_iaa.so 00:07:40.038 CC module/bdev/malloc/bdev_malloc.o 00:07:40.038 LIB libspdk_bdev_delay.a 00:07:40.038 SO libspdk_bdev_delay.so.6.0 00:07:40.038 SYMLINK libspdk_bdev_error.so 00:07:40.038 CC module/bdev/null/bdev_null.o 00:07:40.038 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:40.038 CC module/bdev/nvme/bdev_nvme.o 00:07:40.038 LIB libspdk_bdev_gpt.a 00:07:40.038 CC module/bdev/passthru/vbdev_passthru.o 00:07:40.038 SYMLINK libspdk_bdev_delay.so 00:07:40.038 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:40.038 SO libspdk_bdev_gpt.so.6.0 00:07:40.038 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:40.296 CC module/bdev/raid/bdev_raid.o 00:07:40.297 LIB libspdk_bdev_lvol.a 00:07:40.297 SYMLINK libspdk_bdev_gpt.so 00:07:40.297 SO libspdk_bdev_lvol.so.6.0 00:07:40.297 CC module/bdev/null/bdev_null_rpc.o 00:07:40.297 CC module/bdev/raid/bdev_raid_rpc.o 00:07:40.297 SYMLINK libspdk_bdev_lvol.so 00:07:40.555 LIB libspdk_bdev_passthru.a 00:07:40.555 LIB libspdk_bdev_malloc.a 00:07:40.555 CC module/bdev/split/vbdev_split.o 00:07:40.555 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:40.555 SO libspdk_bdev_passthru.so.6.0 00:07:40.555 SO libspdk_bdev_malloc.so.6.0 00:07:40.555 LIB libspdk_bdev_null.a 00:07:40.555 SO libspdk_bdev_null.so.6.0 00:07:40.555 CC module/bdev/xnvme/bdev_xnvme.o 00:07:40.555 SYMLINK libspdk_bdev_passthru.so 00:07:40.556 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:07:40.556 SYMLINK libspdk_bdev_malloc.so 00:07:40.556 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:40.556 CC module/bdev/split/vbdev_split_rpc.o 00:07:40.556 SYMLINK libspdk_bdev_null.so 00:07:40.814 LIB libspdk_bdev_split.a 00:07:40.814 CC module/bdev/aio/bdev_aio.o 00:07:40.814 SO libspdk_bdev_split.so.6.0 00:07:40.814 CC module/bdev/ftl/bdev_ftl.o 00:07:40.814 LIB libspdk_bdev_xnvme.a 00:07:40.814 LIB libspdk_bdev_zone_block.a 00:07:41.072 SO libspdk_bdev_xnvme.so.3.0 00:07:41.072 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:41.072 CC module/bdev/iscsi/bdev_iscsi.o 00:07:41.072 SO libspdk_bdev_zone_block.so.6.0 00:07:41.072 SYMLINK libspdk_bdev_split.so 00:07:41.072 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:41.072 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:41.072 SYMLINK libspdk_bdev_xnvme.so 00:07:41.072 CC module/bdev/aio/bdev_aio_rpc.o 00:07:41.072 SYMLINK libspdk_bdev_zone_block.so 00:07:41.072 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:41.072 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:41.330 CC module/bdev/nvme/nvme_rpc.o 00:07:41.330 CC module/bdev/nvme/bdev_mdns_client.o 00:07:41.330 LIB libspdk_bdev_ftl.a 00:07:41.330 LIB libspdk_bdev_aio.a 00:07:41.330 SO libspdk_bdev_ftl.so.6.0 00:07:41.330 SO libspdk_bdev_aio.so.6.0 00:07:41.330 SYMLINK libspdk_bdev_ftl.so 00:07:41.330 CC module/bdev/raid/bdev_raid_sb.o 00:07:41.330 CC module/bdev/raid/raid0.o 00:07:41.330 LIB libspdk_bdev_iscsi.a 00:07:41.330 SYMLINK libspdk_bdev_aio.so 00:07:41.330 CC module/bdev/nvme/vbdev_opal.o 00:07:41.330 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:41.330 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:41.587 SO libspdk_bdev_iscsi.so.6.0 00:07:41.587 SYMLINK libspdk_bdev_iscsi.so 00:07:41.587 CC module/bdev/raid/raid1.o 00:07:41.587 CC module/bdev/raid/concat.o 00:07:41.587 LIB libspdk_bdev_virtio.a 00:07:41.587 SO libspdk_bdev_virtio.so.6.0 00:07:41.911 SYMLINK libspdk_bdev_virtio.so 00:07:41.911 LIB libspdk_bdev_raid.a 00:07:41.911 SO libspdk_bdev_raid.so.6.0 
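A note on the recurring LIB/SO/SYMLINK triplets, such as libspdk_bdev_raid just above: LIB archives the static library, SO links the versioned shared object, and SYMLINK points the unversioned name at it. Assuming SPDK's default in-tree output directory, the artifacts can be inspected as in this hypothetical listing:
# Hypothetical inspection of the artifacts produced by the steps above;
# assumes SPDK's default in-tree output directory (build/lib).
ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_bdev_raid.*
# expected shape of the output:
#   libspdk_bdev_raid.a
#   libspdk_bdev_raid.so -> libspdk_bdev_raid.so.6.0
#   libspdk_bdev_raid.so.6.0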
00:07:42.170 SYMLINK libspdk_bdev_raid.so 00:07:43.552 LIB libspdk_bdev_nvme.a 00:07:43.552 SO libspdk_bdev_nvme.so.7.1 00:07:43.552 SYMLINK libspdk_bdev_nvme.so 00:07:44.120 CC module/event/subsystems/iobuf/iobuf.o 00:07:44.120 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:44.120 CC module/event/subsystems/vmd/vmd.o 00:07:44.120 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:44.120 CC module/event/subsystems/keyring/keyring.o 00:07:44.120 CC module/event/subsystems/sock/sock.o 00:07:44.120 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:44.120 CC module/event/subsystems/scheduler/scheduler.o 00:07:44.120 CC module/event/subsystems/fsdev/fsdev.o 00:07:44.379 LIB libspdk_event_keyring.a 00:07:44.379 SO libspdk_event_keyring.so.1.0 00:07:44.379 LIB libspdk_event_vmd.a 00:07:44.379 LIB libspdk_event_vhost_blk.a 00:07:44.379 LIB libspdk_event_sock.a 00:07:44.379 LIB libspdk_event_scheduler.a 00:07:44.379 LIB libspdk_event_fsdev.a 00:07:44.379 SO libspdk_event_vhost_blk.so.3.0 00:07:44.379 SO libspdk_event_vmd.so.6.0 00:07:44.379 SO libspdk_event_sock.so.5.0 00:07:44.379 SO libspdk_event_scheduler.so.4.0 00:07:44.638 SO libspdk_event_fsdev.so.1.0 00:07:44.638 LIB libspdk_event_iobuf.a 00:07:44.638 SYMLINK libspdk_event_keyring.so 00:07:44.638 SYMLINK libspdk_event_vhost_blk.so 00:07:44.638 SYMLINK libspdk_event_vmd.so 00:07:44.638 SYMLINK libspdk_event_scheduler.so 00:07:44.638 SO libspdk_event_iobuf.so.3.0 00:07:44.638 SYMLINK libspdk_event_fsdev.so 00:07:44.638 SYMLINK libspdk_event_sock.so 00:07:44.638 SYMLINK libspdk_event_iobuf.so 00:07:44.896 CC module/event/subsystems/accel/accel.o 00:07:45.173 LIB libspdk_event_accel.a 00:07:45.173 SO libspdk_event_accel.so.6.0 00:07:45.463 SYMLINK libspdk_event_accel.so 00:07:45.721 CC module/event/subsystems/bdev/bdev.o 00:07:45.979 LIB libspdk_event_bdev.a 00:07:45.979 SO libspdk_event_bdev.so.6.0 00:07:45.979 SYMLINK libspdk_event_bdev.so 00:07:46.237 CC module/event/subsystems/ublk/ublk.o 00:07:46.237 CC module/event/subsystems/nbd/nbd.o 00:07:46.237 CC module/event/subsystems/scsi/scsi.o 00:07:46.237 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:46.237 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:46.497 LIB libspdk_event_ublk.a 00:07:46.497 LIB libspdk_event_nbd.a 00:07:46.497 LIB libspdk_event_scsi.a 00:07:46.497 SO libspdk_event_ublk.so.3.0 00:07:46.497 SO libspdk_event_nbd.so.6.0 00:07:46.497 SO libspdk_event_scsi.so.6.0 00:07:46.756 LIB libspdk_event_nvmf.a 00:07:46.756 SYMLINK libspdk_event_ublk.so 00:07:46.756 SYMLINK libspdk_event_nbd.so 00:07:46.756 SYMLINK libspdk_event_scsi.so 00:07:46.756 SO libspdk_event_nvmf.so.6.0 00:07:46.756 SYMLINK libspdk_event_nvmf.so 00:07:47.015 CC module/event/subsystems/iscsi/iscsi.o 00:07:47.015 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:47.272 LIB libspdk_event_vhost_scsi.a 00:07:47.272 SO libspdk_event_vhost_scsi.so.3.0 00:07:47.272 LIB libspdk_event_iscsi.a 00:07:47.272 SO libspdk_event_iscsi.so.6.0 00:07:47.272 SYMLINK libspdk_event_vhost_scsi.so 00:07:47.272 SYMLINK libspdk_event_iscsi.so 00:07:47.530 SO libspdk.so.6.0 00:07:47.530 SYMLINK libspdk.so 00:07:47.789 CXX app/trace/trace.o 00:07:47.789 CC app/trace_record/trace_record.o 00:07:47.789 CC test/rpc_client/rpc_client_test.o 00:07:48.047 TEST_HEADER include/spdk/accel.h 00:07:48.047 TEST_HEADER include/spdk/accel_module.h 00:07:48.047 TEST_HEADER include/spdk/assert.h 00:07:48.047 TEST_HEADER include/spdk/barrier.h 00:07:48.047 TEST_HEADER include/spdk/base64.h 00:07:48.047 TEST_HEADER include/spdk/bdev.h 
00:07:48.047 TEST_HEADER include/spdk/bdev_module.h 00:07:48.047 TEST_HEADER include/spdk/bdev_zone.h 00:07:48.047 TEST_HEADER include/spdk/bit_array.h 00:07:48.047 TEST_HEADER include/spdk/bit_pool.h 00:07:48.047 TEST_HEADER include/spdk/blob_bdev.h 00:07:48.047 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:48.047 TEST_HEADER include/spdk/blobfs.h 00:07:48.047 TEST_HEADER include/spdk/blob.h 00:07:48.047 TEST_HEADER include/spdk/conf.h 00:07:48.047 TEST_HEADER include/spdk/config.h 00:07:48.047 TEST_HEADER include/spdk/cpuset.h 00:07:48.047 TEST_HEADER include/spdk/crc16.h 00:07:48.047 TEST_HEADER include/spdk/crc32.h 00:07:48.047 TEST_HEADER include/spdk/crc64.h 00:07:48.047 TEST_HEADER include/spdk/dif.h 00:07:48.047 TEST_HEADER include/spdk/dma.h 00:07:48.047 TEST_HEADER include/spdk/endian.h 00:07:48.047 TEST_HEADER include/spdk/env_dpdk.h 00:07:48.047 TEST_HEADER include/spdk/env.h 00:07:48.047 TEST_HEADER include/spdk/event.h 00:07:48.047 TEST_HEADER include/spdk/fd_group.h 00:07:48.047 TEST_HEADER include/spdk/fd.h 00:07:48.047 TEST_HEADER include/spdk/file.h 00:07:48.047 TEST_HEADER include/spdk/fsdev.h 00:07:48.047 TEST_HEADER include/spdk/fsdev_module.h 00:07:48.047 TEST_HEADER include/spdk/ftl.h 00:07:48.047 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:48.047 CC examples/ioat/perf/perf.o 00:07:48.047 TEST_HEADER include/spdk/gpt_spec.h 00:07:48.047 CC test/thread/poller_perf/poller_perf.o 00:07:48.047 TEST_HEADER include/spdk/hexlify.h 00:07:48.047 TEST_HEADER include/spdk/histogram_data.h 00:07:48.047 TEST_HEADER include/spdk/idxd.h 00:07:48.047 TEST_HEADER include/spdk/idxd_spec.h 00:07:48.047 TEST_HEADER include/spdk/init.h 00:07:48.047 TEST_HEADER include/spdk/ioat.h 00:07:48.047 TEST_HEADER include/spdk/ioat_spec.h 00:07:48.047 TEST_HEADER include/spdk/iscsi_spec.h 00:07:48.047 CC examples/util/zipf/zipf.o 00:07:48.047 TEST_HEADER include/spdk/json.h 00:07:48.047 TEST_HEADER include/spdk/jsonrpc.h 00:07:48.047 TEST_HEADER include/spdk/keyring.h 00:07:48.047 TEST_HEADER include/spdk/keyring_module.h 00:07:48.047 CC test/dma/test_dma/test_dma.o 00:07:48.047 TEST_HEADER include/spdk/likely.h 00:07:48.047 TEST_HEADER include/spdk/log.h 00:07:48.047 TEST_HEADER include/spdk/lvol.h 00:07:48.047 TEST_HEADER include/spdk/md5.h 00:07:48.047 TEST_HEADER include/spdk/memory.h 00:07:48.047 TEST_HEADER include/spdk/mmio.h 00:07:48.047 TEST_HEADER include/spdk/nbd.h 00:07:48.047 TEST_HEADER include/spdk/net.h 00:07:48.047 TEST_HEADER include/spdk/notify.h 00:07:48.047 TEST_HEADER include/spdk/nvme.h 00:07:48.047 TEST_HEADER include/spdk/nvme_intel.h 00:07:48.047 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:48.047 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:48.047 TEST_HEADER include/spdk/nvme_spec.h 00:07:48.047 CC test/app/bdev_svc/bdev_svc.o 00:07:48.047 TEST_HEADER include/spdk/nvme_zns.h 00:07:48.047 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:48.047 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:48.047 TEST_HEADER include/spdk/nvmf.h 00:07:48.047 TEST_HEADER include/spdk/nvmf_spec.h 00:07:48.047 TEST_HEADER include/spdk/nvmf_transport.h 00:07:48.047 TEST_HEADER include/spdk/opal.h 00:07:48.047 TEST_HEADER include/spdk/opal_spec.h 00:07:48.047 TEST_HEADER include/spdk/pci_ids.h 00:07:48.047 TEST_HEADER include/spdk/pipe.h 00:07:48.047 TEST_HEADER include/spdk/queue.h 00:07:48.047 TEST_HEADER include/spdk/reduce.h 00:07:48.047 CC test/env/mem_callbacks/mem_callbacks.o 00:07:48.047 TEST_HEADER include/spdk/rpc.h 00:07:48.047 TEST_HEADER include/spdk/scheduler.h 00:07:48.047 
TEST_HEADER include/spdk/scsi.h 00:07:48.047 TEST_HEADER include/spdk/scsi_spec.h 00:07:48.048 TEST_HEADER include/spdk/sock.h 00:07:48.048 TEST_HEADER include/spdk/stdinc.h 00:07:48.306 TEST_HEADER include/spdk/string.h 00:07:48.306 TEST_HEADER include/spdk/thread.h 00:07:48.306 TEST_HEADER include/spdk/trace.h 00:07:48.306 TEST_HEADER include/spdk/trace_parser.h 00:07:48.306 TEST_HEADER include/spdk/tree.h 00:07:48.306 TEST_HEADER include/spdk/ublk.h 00:07:48.306 TEST_HEADER include/spdk/util.h 00:07:48.306 TEST_HEADER include/spdk/uuid.h 00:07:48.306 LINK rpc_client_test 00:07:48.306 TEST_HEADER include/spdk/version.h 00:07:48.306 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:48.306 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:48.306 TEST_HEADER include/spdk/vhost.h 00:07:48.306 TEST_HEADER include/spdk/vmd.h 00:07:48.306 TEST_HEADER include/spdk/xor.h 00:07:48.306 TEST_HEADER include/spdk/zipf.h 00:07:48.306 CXX test/cpp_headers/accel.o 00:07:48.306 LINK poller_perf 00:07:48.306 LINK spdk_trace_record 00:07:48.306 LINK zipf 00:07:48.306 LINK ioat_perf 00:07:48.306 LINK bdev_svc 00:07:48.640 CXX test/cpp_headers/accel_module.o 00:07:48.640 CXX test/cpp_headers/assert.o 00:07:48.640 CXX test/cpp_headers/barrier.o 00:07:48.640 CXX test/cpp_headers/base64.o 00:07:48.640 CXX test/cpp_headers/bdev.o 00:07:48.640 CC examples/ioat/verify/verify.o 00:07:48.640 LINK spdk_trace 00:07:48.640 LINK test_dma 00:07:48.898 CC test/app/histogram_perf/histogram_perf.o 00:07:48.898 CC test/app/jsoncat/jsoncat.o 00:07:48.898 LINK mem_callbacks 00:07:48.898 CXX test/cpp_headers/bdev_module.o 00:07:48.898 CC test/env/vtophys/vtophys.o 00:07:48.898 LINK verify 00:07:48.898 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:48.898 CC test/event/event_perf/event_perf.o 00:07:48.898 LINK histogram_perf 00:07:48.898 CXX test/cpp_headers/bdev_zone.o 00:07:48.898 LINK jsoncat 00:07:48.898 CXX test/cpp_headers/bit_array.o 00:07:49.156 CC app/nvmf_tgt/nvmf_main.o 00:07:49.156 LINK vtophys 00:07:49.156 LINK event_perf 00:07:49.156 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:49.156 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:49.156 CXX test/cpp_headers/bit_pool.o 00:07:49.156 LINK nvmf_tgt 00:07:49.415 CC test/app/stub/stub.o 00:07:49.415 LINK nvme_fuzz 00:07:49.415 LINK interrupt_tgt 00:07:49.415 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:49.415 CXX test/cpp_headers/blob_bdev.o 00:07:49.415 LINK env_dpdk_post_init 00:07:49.415 CC test/accel/dif/dif.o 00:07:49.415 CC test/event/reactor/reactor.o 00:07:49.415 CC test/blobfs/mkfs/mkfs.o 00:07:49.674 LINK stub 00:07:49.674 CXX test/cpp_headers/blobfs_bdev.o 00:07:49.674 LINK reactor 00:07:49.674 LINK mkfs 00:07:49.932 CC test/event/reactor_perf/reactor_perf.o 00:07:49.932 CC test/env/memory/memory_ut.o 00:07:49.932 CC app/iscsi_tgt/iscsi_tgt.o 00:07:49.932 CXX test/cpp_headers/blobfs.o 00:07:49.932 CC test/event/app_repeat/app_repeat.o 00:07:49.932 CC examples/thread/thread/thread_ex.o 00:07:49.932 CXX test/cpp_headers/blob.o 00:07:49.932 LINK reactor_perf 00:07:50.190 CXX test/cpp_headers/conf.o 00:07:50.190 LINK iscsi_tgt 00:07:50.190 CXX test/cpp_headers/config.o 00:07:50.190 CXX test/cpp_headers/cpuset.o 00:07:50.190 LINK app_repeat 00:07:50.190 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:50.190 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:50.448 LINK thread 00:07:50.448 CC test/event/scheduler/scheduler.o 00:07:50.448 CXX test/cpp_headers/crc16.o 00:07:50.448 LINK dif 00:07:50.448 CXX test/cpp_headers/crc32.o 00:07:50.706 CC 
app/spdk_tgt/spdk_tgt.o 00:07:50.706 CXX test/cpp_headers/crc64.o 00:07:50.706 LINK scheduler 00:07:50.706 CC test/lvol/esnap/esnap.o 00:07:50.706 LINK vhost_fuzz 00:07:50.965 CXX test/cpp_headers/dif.o 00:07:50.965 CC test/env/pci/pci_ut.o 00:07:50.965 LINK spdk_tgt 00:07:50.966 CC examples/sock/hello_world/hello_sock.o 00:07:50.966 CC test/nvme/aer/aer.o 00:07:50.966 CXX test/cpp_headers/dma.o 00:07:51.225 CC test/nvme/reset/reset.o 00:07:51.225 CC app/spdk_lspci/spdk_lspci.o 00:07:51.225 CXX test/cpp_headers/endian.o 00:07:51.225 LINK hello_sock 00:07:51.225 LINK spdk_lspci 00:07:51.225 LINK memory_ut 00:07:51.485 LINK aer 00:07:51.485 CXX test/cpp_headers/env_dpdk.o 00:07:51.485 CC app/spdk_nvme_perf/perf.o 00:07:51.485 LINK pci_ut 00:07:51.485 LINK reset 00:07:51.744 CC examples/vmd/lsvmd/lsvmd.o 00:07:51.744 CXX test/cpp_headers/env.o 00:07:51.744 CXX test/cpp_headers/event.o 00:07:51.744 CC examples/idxd/perf/perf.o 00:07:51.744 CC examples/vmd/led/led.o 00:07:51.744 LINK iscsi_fuzz 00:07:51.744 CC test/nvme/sgl/sgl.o 00:07:51.744 CXX test/cpp_headers/fd_group.o 00:07:51.744 LINK lsvmd 00:07:52.003 CXX test/cpp_headers/fd.o 00:07:52.003 LINK led 00:07:52.003 CXX test/cpp_headers/file.o 00:07:52.003 CC app/spdk_nvme_identify/identify.o 00:07:52.003 CXX test/cpp_headers/fsdev.o 00:07:52.003 CC app/spdk_nvme_discover/discovery_aer.o 00:07:52.261 LINK sgl 00:07:52.261 LINK idxd_perf 00:07:52.261 CC app/spdk_top/spdk_top.o 00:07:52.261 CC test/nvme/e2edp/nvme_dp.o 00:07:52.261 CXX test/cpp_headers/fsdev_module.o 00:07:52.522 LINK spdk_nvme_discover 00:07:52.522 CXX test/cpp_headers/ftl.o 00:07:52.522 CC test/nvme/overhead/overhead.o 00:07:52.522 CC test/bdev/bdevio/bdevio.o 00:07:52.781 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:52.781 LINK nvme_dp 00:07:52.781 CXX test/cpp_headers/fuse_dispatcher.o 00:07:52.781 LINK spdk_nvme_perf 00:07:53.039 CC app/vhost/vhost.o 00:07:53.039 CXX test/cpp_headers/gpt_spec.o 00:07:53.039 LINK hello_fsdev 00:07:53.039 LINK overhead 00:07:53.039 CC app/spdk_dd/spdk_dd.o 00:07:53.039 LINK bdevio 00:07:53.297 LINK vhost 00:07:53.297 CXX test/cpp_headers/hexlify.o 00:07:53.297 LINK spdk_nvme_identify 00:07:53.297 CC app/fio/nvme/fio_plugin.o 00:07:53.557 CC test/nvme/err_injection/err_injection.o 00:07:53.557 CXX test/cpp_headers/histogram_data.o 00:07:53.557 LINK spdk_top 00:07:53.557 CC test/nvme/startup/startup.o 00:07:53.557 LINK spdk_dd 00:07:53.557 CC examples/accel/perf/accel_perf.o 00:07:53.557 CXX test/cpp_headers/idxd.o 00:07:53.557 CC app/fio/bdev/fio_plugin.o 00:07:53.816 LINK err_injection 00:07:53.816 LINK startup 00:07:53.816 CXX test/cpp_headers/idxd_spec.o 00:07:53.816 CC examples/blob/hello_world/hello_blob.o 00:07:53.816 CC examples/blob/cli/blobcli.o 00:07:54.075 CC test/nvme/reserve/reserve.o 00:07:54.075 CXX test/cpp_headers/init.o 00:07:54.075 LINK spdk_nvme 00:07:54.075 LINK hello_blob 00:07:54.075 CC examples/nvme/hello_world/hello_world.o 00:07:54.075 CXX test/cpp_headers/ioat.o 00:07:54.075 LINK accel_perf 00:07:54.335 CC examples/nvme/reconnect/reconnect.o 00:07:54.335 LINK reserve 00:07:54.335 CXX test/cpp_headers/ioat_spec.o 00:07:54.335 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:54.335 LINK spdk_bdev 00:07:54.335 CXX test/cpp_headers/iscsi_spec.o 00:07:54.335 CXX test/cpp_headers/json.o 00:07:54.335 LINK hello_world 00:07:54.335 LINK blobcli 00:07:54.594 CXX test/cpp_headers/jsonrpc.o 00:07:54.594 CXX test/cpp_headers/keyring.o 00:07:54.594 CC test/nvme/simple_copy/simple_copy.o 00:07:54.594 CC 
examples/nvme/arbitration/arbitration.o 00:07:54.594 LINK reconnect 00:07:54.594 CC examples/nvme/hotplug/hotplug.o 00:07:54.853 CXX test/cpp_headers/keyring_module.o 00:07:54.853 CC test/nvme/connect_stress/connect_stress.o 00:07:54.853 CC test/nvme/boot_partition/boot_partition.o 00:07:54.853 CC examples/bdev/hello_world/hello_bdev.o 00:07:54.853 LINK simple_copy 00:07:54.853 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:54.853 CXX test/cpp_headers/likely.o 00:07:54.853 LINK hotplug 00:07:55.112 LINK arbitration 00:07:55.112 LINK connect_stress 00:07:55.112 LINK boot_partition 00:07:55.112 CXX test/cpp_headers/log.o 00:07:55.112 LINK nvme_manage 00:07:55.112 LINK hello_bdev 00:07:55.112 LINK cmb_copy 00:07:55.370 CC test/nvme/compliance/nvme_compliance.o 00:07:55.370 CXX test/cpp_headers/lvol.o 00:07:55.370 CXX test/cpp_headers/md5.o 00:07:55.370 CC test/nvme/fused_ordering/fused_ordering.o 00:07:55.370 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:55.370 CC test/nvme/fdp/fdp.o 00:07:55.628 CXX test/cpp_headers/memory.o 00:07:55.628 CC examples/bdev/bdevperf/bdevperf.o 00:07:55.628 CC examples/nvme/abort/abort.o 00:07:55.628 CC test/nvme/cuse/cuse.o 00:07:55.628 CXX test/cpp_headers/mmio.o 00:07:55.628 LINK fused_ordering 00:07:55.628 LINK doorbell_aers 00:07:55.628 LINK nvme_compliance 00:07:55.628 CXX test/cpp_headers/nbd.o 00:07:55.885 CXX test/cpp_headers/net.o 00:07:55.885 CXX test/cpp_headers/notify.o 00:07:55.885 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:55.885 LINK fdp 00:07:55.885 CXX test/cpp_headers/nvme.o 00:07:55.885 CXX test/cpp_headers/nvme_intel.o 00:07:55.885 LINK pmr_persistence 00:07:55.886 CXX test/cpp_headers/nvme_ocssd.o 00:07:56.171 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:56.171 CXX test/cpp_headers/nvme_spec.o 00:07:56.171 LINK abort 00:07:56.171 CXX test/cpp_headers/nvme_zns.o 00:07:56.171 CXX test/cpp_headers/nvmf_cmd.o 00:07:56.171 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:56.171 CXX test/cpp_headers/nvmf.o 00:07:56.171 CXX test/cpp_headers/nvmf_spec.o 00:07:56.171 CXX test/cpp_headers/nvmf_transport.o 00:07:56.171 CXX test/cpp_headers/opal.o 00:07:56.430 CXX test/cpp_headers/opal_spec.o 00:07:56.430 CXX test/cpp_headers/pci_ids.o 00:07:56.430 CXX test/cpp_headers/pipe.o 00:07:56.430 CXX test/cpp_headers/queue.o 00:07:56.430 CXX test/cpp_headers/reduce.o 00:07:56.430 CXX test/cpp_headers/rpc.o 00:07:56.430 CXX test/cpp_headers/scheduler.o 00:07:56.430 CXX test/cpp_headers/scsi.o 00:07:56.688 CXX test/cpp_headers/scsi_spec.o 00:07:56.688 LINK bdevperf 00:07:56.688 CXX test/cpp_headers/sock.o 00:07:56.688 CXX test/cpp_headers/stdinc.o 00:07:56.688 CXX test/cpp_headers/string.o 00:07:56.688 CXX test/cpp_headers/thread.o 00:07:56.688 CXX test/cpp_headers/trace.o 00:07:56.688 CXX test/cpp_headers/trace_parser.o 00:07:56.688 CXX test/cpp_headers/tree.o 00:07:56.688 CXX test/cpp_headers/ublk.o 00:07:56.947 CXX test/cpp_headers/util.o 00:07:56.947 CXX test/cpp_headers/uuid.o 00:07:56.947 CXX test/cpp_headers/version.o 00:07:56.947 CXX test/cpp_headers/vfio_user_pci.o 00:07:56.947 CXX test/cpp_headers/vfio_user_spec.o 00:07:56.947 CXX test/cpp_headers/vhost.o 00:07:56.947 CXX test/cpp_headers/vmd.o 00:07:56.947 CXX test/cpp_headers/xor.o 00:07:56.947 CXX test/cpp_headers/zipf.o 00:07:56.947 CC examples/nvmf/nvmf/nvmf.o 00:07:57.205 LINK cuse 00:07:57.462 LINK nvmf 00:07:58.029 LINK esnap 00:07:58.596 00:07:58.596 real 1m38.705s 00:07:58.596 user 9m12.171s 00:07:58.596 sys 2m8.695s 00:07:58.596 13:33:52 make -- common/autotest_common.sh@1128 -- $ 
xtrace_disable 00:07:58.596 13:33:52 make -- common/autotest_common.sh@10 -- $ set +x 00:07:58.596 ************************************ 00:07:58.596 END TEST make 00:07:58.596 ************************************ 00:07:58.596 13:33:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:58.596 13:33:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:58.596 13:33:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:58.596 13:33:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.596 13:33:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:58.596 13:33:52 -- pm/common@44 -- $ pid=5347 00:07:58.596 13:33:52 -- pm/common@50 -- $ kill -TERM 5347 00:07:58.596 13:33:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.596 13:33:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:58.596 13:33:52 -- pm/common@44 -- $ pid=5349 00:07:58.596 13:33:52 -- pm/common@50 -- $ kill -TERM 5349 00:07:58.596 13:33:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:58.596 13:33:52 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:58.596 13:33:52 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:58.596 13:33:52 -- common/autotest_common.sh@1691 -- # lcov --version 00:07:58.596 13:33:52 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:58.855 13:33:52 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:58.855 13:33:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.855 13:33:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.855 13:33:52 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.855 13:33:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.855 13:33:52 -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.855 13:33:52 -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.855 13:33:52 -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.855 13:33:52 -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.855 13:33:52 -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.855 13:33:52 -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.855 13:33:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.855 13:33:52 -- scripts/common.sh@344 -- # case "$op" in 00:07:58.855 13:33:52 -- scripts/common.sh@345 -- # : 1 00:07:58.855 13:33:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.855 13:33:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.855 13:33:52 -- scripts/common.sh@365 -- # decimal 1 00:07:58.855 13:33:52 -- scripts/common.sh@353 -- # local d=1 00:07:58.855 13:33:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.855 13:33:52 -- scripts/common.sh@355 -- # echo 1 00:07:58.855 13:33:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.855 13:33:52 -- scripts/common.sh@366 -- # decimal 2 00:07:58.855 13:33:52 -- scripts/common.sh@353 -- # local d=2 00:07:58.855 13:33:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.855 13:33:52 -- scripts/common.sh@355 -- # echo 2 00:07:58.855 13:33:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.855 13:33:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.855 13:33:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.855 13:33:52 -- scripts/common.sh@368 -- # return 0 00:07:58.855 13:33:52 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.855 13:33:52 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:58.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.855 --rc genhtml_branch_coverage=1 00:07:58.855 --rc genhtml_function_coverage=1 00:07:58.855 --rc genhtml_legend=1 00:07:58.855 --rc geninfo_all_blocks=1 00:07:58.855 --rc geninfo_unexecuted_blocks=1 00:07:58.855 00:07:58.855 ' 00:07:58.855 13:33:52 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:58.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.855 --rc genhtml_branch_coverage=1 00:07:58.855 --rc genhtml_function_coverage=1 00:07:58.855 --rc genhtml_legend=1 00:07:58.855 --rc geninfo_all_blocks=1 00:07:58.855 --rc geninfo_unexecuted_blocks=1 00:07:58.855 00:07:58.855 ' 00:07:58.855 13:33:52 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:58.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.855 --rc genhtml_branch_coverage=1 00:07:58.855 --rc genhtml_function_coverage=1 00:07:58.855 --rc genhtml_legend=1 00:07:58.855 --rc geninfo_all_blocks=1 00:07:58.855 --rc geninfo_unexecuted_blocks=1 00:07:58.855 00:07:58.855 ' 00:07:58.855 13:33:52 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:58.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.855 --rc genhtml_branch_coverage=1 00:07:58.855 --rc genhtml_function_coverage=1 00:07:58.855 --rc genhtml_legend=1 00:07:58.855 --rc geninfo_all_blocks=1 00:07:58.855 --rc geninfo_unexecuted_blocks=1 00:07:58.855 00:07:58.855 ' 00:07:58.855 13:33:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.855 13:33:52 -- nvmf/common.sh@7 -- # uname -s 00:07:58.855 13:33:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.855 13:33:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.856 13:33:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.856 13:33:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.856 13:33:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.856 13:33:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.856 13:33:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.856 13:33:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.856 13:33:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.856 13:33:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.856 13:33:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5bc0c953-5082-4147-bb80-66cd1b39e61f 00:07:58.856 
13:33:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5bc0c953-5082-4147-bb80-66cd1b39e61f 00:07:58.856 13:33:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.856 13:33:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.856 13:33:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:58.856 13:33:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.856 13:33:52 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.856 13:33:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.856 13:33:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.856 13:33:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.856 13:33:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.856 13:33:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.856 13:33:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.856 13:33:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.856 13:33:52 -- paths/export.sh@5 -- # export PATH 00:07:58.856 13:33:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.856 13:33:52 -- nvmf/common.sh@51 -- # : 0 00:07:58.856 13:33:52 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.856 13:33:52 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.856 13:33:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.856 13:33:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.856 13:33:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.856 13:33:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.856 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.856 13:33:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.856 13:33:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.856 13:33:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.856 13:33:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:58.856 13:33:52 -- spdk/autotest.sh@32 -- # uname -s 00:07:58.856 13:33:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:58.856 13:33:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:58.856 13:33:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:58.856 13:33:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:58.856 13:33:52 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:58.856 13:33:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:58.856 13:33:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:58.856 13:33:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:58.856 13:33:52 -- spdk/autotest.sh@48 -- # udevadm_pid=54902 00:07:58.856 13:33:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:58.856 13:33:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:58.856 13:33:52 -- pm/common@17 -- # local monitor 00:07:58.856 13:33:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.856 13:33:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.856 13:33:52 -- pm/common@25 -- # sleep 1 00:07:58.856 13:33:52 -- pm/common@21 -- # date +%s 00:07:58.856 13:33:52 -- pm/common@21 -- # date +%s 00:07:58.856 13:33:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730900032 00:07:58.856 13:33:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730900032 00:07:58.856 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730900032_collect-cpu-load.pm.log 00:07:58.856 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730900032_collect-vmstat.pm.log 00:07:59.791 13:33:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:59.791 13:33:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:59.791 13:33:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.791 13:33:53 -- common/autotest_common.sh@10 -- # set +x 00:07:59.791 13:33:53 -- spdk/autotest.sh@59 -- # create_test_list 00:07:59.791 13:33:53 -- common/autotest_common.sh@750 -- # xtrace_disable 00:07:59.791 13:33:53 -- common/autotest_common.sh@10 -- # set +x 00:08:00.049 13:33:53 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:00.049 13:33:53 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:00.049 13:33:53 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:00.049 13:33:53 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:00.049 13:33:53 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:00.049 13:33:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:00.049 13:33:53 -- common/autotest_common.sh@1455 -- # uname 00:08:00.049 13:33:53 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:08:00.049 13:33:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:00.049 13:33:53 -- common/autotest_common.sh@1475 -- # uname 00:08:00.049 13:33:53 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:08:00.049 13:33:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:00.049 13:33:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:00.049 lcov: LCOV version 1.15 00:08:00.049 13:33:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:18.210 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:18.210 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:36.302 13:34:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:36.302 13:34:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.302 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:08:36.302 13:34:27 -- spdk/autotest.sh@78 -- # rm -f 00:08:36.302 13:34:27 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:36.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.302 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:36.302 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:36.302 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:08:36.302 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:08:36.302 13:34:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:36.302 13:34:28 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:36.302 13:34:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:36.302 13:34:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:36.302 13:34:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:36.302 13:34:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:36.302 13:34:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:36.302 13:34:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:36.302 13:34:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:36.302 13:34:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:36.302 13:34:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:36.302 13:34:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:36.302 13:34:28 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:36.302 13:34:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:36.302 13:34:28 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:36.302 13:34:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:36.302 13:34:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:36.302 13:34:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:36.302 13:34:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:36.302 13:34:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:36.302 13:34:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:36.302 13:34:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:36.302 13:34:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:36.302 13:34:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:36.302 13:34:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:36.302 No valid GPT data, bailing 00:08:36.302 13:34:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:36.302 13:34:28 -- scripts/common.sh@394 -- # pt= 00:08:36.302 13:34:28 -- scripts/common.sh@395 -- # return 1 00:08:36.302 13:34:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:36.302 1+0 records in 00:08:36.302 1+0 records out 00:08:36.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123073 s, 85.2 MB/s 00:08:36.302 13:34:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:36.302 13:34:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:36.302 13:34:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:36.302 13:34:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:36.302 13:34:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:36.302 No valid GPT data, bailing 00:08:36.302 13:34:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:36.302 13:34:28 -- scripts/common.sh@394 -- # pt= 00:08:36.302 13:34:28 -- scripts/common.sh@395 -- # return 1 00:08:36.302 13:34:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:36.302 1+0 records in 00:08:36.302 1+0 records out 00:08:36.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459979 s, 228 MB/s 00:08:36.302 13:34:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:36.302 13:34:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:36.302 13:34:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:08:36.302 13:34:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:08:36.302 13:34:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:08:36.302 No valid GPT data, bailing 00:08:36.302 13:34:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:08:36.302 13:34:28 -- scripts/common.sh@394 -- # pt= 00:08:36.302 13:34:28 -- scripts/common.sh@395 -- # return 1 00:08:36.302 13:34:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:08:36.302 1+0 
records in 00:08:36.302 1+0 records out 00:08:36.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00339645 s, 309 MB/s 00:08:36.302 13:34:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:36.302 13:34:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:36.302 13:34:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:08:36.302 13:34:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:08:36.302 13:34:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:08:36.302 No valid GPT data, bailing 00:08:36.302 13:34:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:08:36.302 13:34:29 -- scripts/common.sh@394 -- # pt= 00:08:36.302 13:34:29 -- scripts/common.sh@395 -- # return 1 00:08:36.302 13:34:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:08:36.302 1+0 records in 00:08:36.302 1+0 records out 00:08:36.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542605 s, 193 MB/s 00:08:36.302 13:34:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:36.302 13:34:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:36.302 13:34:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:08:36.302 13:34:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:08:36.302 13:34:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:08:36.302 No valid GPT data, bailing 00:08:36.302 13:34:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:08:36.302 13:34:29 -- scripts/common.sh@394 -- # pt= 00:08:36.302 13:34:29 -- scripts/common.sh@395 -- # return 1 00:08:36.302 13:34:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:08:36.302 1+0 records in 00:08:36.302 1+0 records out 00:08:36.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475794 s, 220 MB/s 00:08:36.302 13:34:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:36.302 13:34:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:36.302 13:34:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:08:36.302 13:34:29 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:08:36.302 13:34:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:08:36.302 No valid GPT data, bailing 00:08:36.302 13:34:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:08:36.302 13:34:29 -- scripts/common.sh@394 -- # pt= 00:08:36.302 13:34:29 -- scripts/common.sh@395 -- # return 1 00:08:36.302 13:34:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:08:36.302 1+0 records in 00:08:36.302 1+0 records out 00:08:36.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449458 s, 233 MB/s 00:08:36.302 13:34:29 -- spdk/autotest.sh@105 -- # sync 00:08:36.302 13:34:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:36.302 13:34:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:36.302 13:34:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:37.675 13:34:31 -- spdk/autotest.sh@111 -- # uname -s 00:08:37.675 13:34:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:37.675 13:34:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:37.675 13:34:31 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:38.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.826 
Hugepages 00:08:38.826 node hugesize free / total 00:08:38.826 node0 1048576kB 0 / 0 00:08:38.826 node0 2048kB 0 / 0 00:08:38.826 00:08:38.826 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:38.826 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:39.095 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:39.095 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:39.095 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:08:39.367 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:08:39.367 13:34:33 -- spdk/autotest.sh@117 -- # uname -s 00:08:39.367 13:34:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:39.367 13:34:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:39.367 13:34:33 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:39.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:40.523 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.523 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.523 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.782 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.782 13:34:34 -- common/autotest_common.sh@1515 -- # sleep 1 00:08:41.718 13:34:35 -- common/autotest_common.sh@1516 -- # bdfs=() 00:08:41.718 13:34:35 -- common/autotest_common.sh@1516 -- # local bdfs 00:08:41.718 13:34:35 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:08:41.718 13:34:35 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:08:41.718 13:34:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:41.718 13:34:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:41.718 13:34:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:41.718 13:34:35 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:41.718 13:34:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:41.718 13:34:35 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:41.718 13:34:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:41.718 13:34:35 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:42.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:42.284 Waiting for block devices as requested 00:08:42.543 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.543 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.543 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.801 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.069 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:48.069 13:34:41 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:48.069 13:34:41 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:48.069 13:34:41 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:48.069 13:34:41 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:08:48.069 13:34:41 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:08:48.069 13:34:41 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:48.069 13:34:41 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:48.069 13:34:41 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:48.069 13:34:41 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1541 -- # continue 00:08:48.069 13:34:41 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:48.069 13:34:41 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:48.069 13:34:41 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:48.069 13:34:41 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:48.069 13:34:41 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1541 -- # continue 00:08:48.069 13:34:41 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:48.069 13:34:41 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:48.069 13:34:41 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:48.069 13:34:41 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:48.069 13:34:41 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1541 -- # continue 00:08:48.069 13:34:41 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:48.069 13:34:41 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:08:48.069 13:34:41 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:48.069 13:34:41 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:48.069 13:34:41 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:48.069 13:34:41 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:48.069 13:34:41 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:48.069 13:34:41 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
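The sysfs walk traced above is worth restating compactly. A minimal sketch of the same BDF-to-controller lookup and OACS decoding, assuming an illustrative BDF of 0000:00:10.0 (only the readlink/grep/basename chain and the nvme id-ctrl field names are taken from the trace itself):

    bdf=0000:00:10.0                                        # illustrative PCI address
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$path")                          # e.g. /dev/nvme1
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2) # e.g. ' 0x12a'
    if (( oacs & 0x8 )); then                               # bit 3: namespace management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        [[ $unvmcap -eq 0 ]] && echo "$ctrlr: no unallocated capacity, moving on"
    fi

This is why each iteration above derives oacs_ns_manage=8 from oacs=0x12a and then hits continue once unvmcap reads back as 0.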
00:08:48.069 13:34:41 -- common/autotest_common.sh@1541 -- # continue 00:08:48.069 13:34:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:48.069 13:34:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.069 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:08:48.069 13:34:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:48.069 13:34:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.069 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:08:48.069 13:34:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:48.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.572 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.572 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.572 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.572 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.572 13:34:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:49.572 13:34:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.572 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:08:49.572 13:34:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:49.572 13:34:43 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:49.572 13:34:43 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:49.572 13:34:43 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:49.572 13:34:43 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:49.572 13:34:43 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:49.572 13:34:43 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:49.572 13:34:43 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:49.572 13:34:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:49.572 13:34:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:49.572 13:34:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:49.572 13:34:43 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.572 13:34:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:49.830 13:34:43 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:49.830 13:34:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:49.830 13:34:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:49.830 13:34:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:49.830 13:34:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:49.830 13:34:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:49.830 13:34:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:49.830 13:34:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:49.830 13:34:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:49.830 13:34:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:49.830 13:34:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:49.830 13:34:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:08:49.830 13:34:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:49.830 13:34:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
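Each get_nvme_bdfs call above builds its BDF list by asking gen_nvme.sh for an SPDK JSON config and extracting the transport addresses. A minimal sketch, with the JSON shape inferred from the jq path in the trace (the field values shown are illustrative):

    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
    # gen_nvme.sh emits roughly:
    #   { "config": [ { "method": "bdev_nvme_attach_controller",
    #                   "params": { "trtype": "PCIe", "traddr": "0000:00:10.0", ... } },
    #                 ... ] }
    # so the filter prints one PCI address per line, matching the
    # 0000:00:10.0 through 0000:00:13.0 list printed above.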
00:08:49.830 13:34:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:49.830 13:34:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:08:49.830 13:34:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:49.830 13:34:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:49.830 13:34:43 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:08:49.830 13:34:43 -- common/autotest_common.sh@1570 -- # return 0 00:08:49.830 13:34:43 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:08:49.830 13:34:43 -- common/autotest_common.sh@1578 -- # return 0 00:08:49.830 13:34:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:49.830 13:34:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:49.830 13:34:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:49.830 13:34:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:49.830 13:34:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:49.830 13:34:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.830 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:08:49.830 13:34:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:49.830 13:34:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:49.830 13:34:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:49.830 13:34:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.830 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:08:49.830 ************************************ 00:08:49.830 START TEST env 00:08:49.830 ************************************ 00:08:49.830 13:34:43 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:49.830 * Looking for test storage... 00:08:49.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:49.830 13:34:43 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.830 13:34:43 env -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.830 13:34:43 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:50.088 13:34:43 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:50.088 13:34:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.088 13:34:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.088 13:34:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.088 13:34:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.088 13:34:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.088 13:34:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.088 13:34:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.088 13:34:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.088 13:34:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.088 13:34:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.088 13:34:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.089 13:34:43 env -- scripts/common.sh@344 -- # case "$op" in 00:08:50.089 13:34:43 env -- scripts/common.sh@345 -- # : 1 00:08:50.089 13:34:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.089 13:34:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.089 13:34:43 env -- scripts/common.sh@365 -- # decimal 1 00:08:50.089 13:34:43 env -- scripts/common.sh@353 -- # local d=1 00:08:50.089 13:34:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.089 13:34:43 env -- scripts/common.sh@355 -- # echo 1 00:08:50.089 13:34:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.089 13:34:43 env -- scripts/common.sh@366 -- # decimal 2 00:08:50.089 13:34:43 env -- scripts/common.sh@353 -- # local d=2 00:08:50.089 13:34:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.089 13:34:43 env -- scripts/common.sh@355 -- # echo 2 00:08:50.089 13:34:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.089 13:34:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.089 13:34:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.089 13:34:43 env -- scripts/common.sh@368 -- # return 0 00:08:50.089 13:34:43 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.089 13:34:43 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:50.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.089 --rc genhtml_branch_coverage=1 00:08:50.089 --rc genhtml_function_coverage=1 00:08:50.089 --rc genhtml_legend=1 00:08:50.089 --rc geninfo_all_blocks=1 00:08:50.089 --rc geninfo_unexecuted_blocks=1 00:08:50.089 00:08:50.089 ' 00:08:50.089 13:34:43 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:50.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.089 --rc genhtml_branch_coverage=1 00:08:50.089 --rc genhtml_function_coverage=1 00:08:50.089 --rc genhtml_legend=1 00:08:50.089 --rc geninfo_all_blocks=1 00:08:50.089 --rc geninfo_unexecuted_blocks=1 00:08:50.089 00:08:50.089 ' 00:08:50.089 13:34:43 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:50.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.089 --rc genhtml_branch_coverage=1 00:08:50.089 --rc genhtml_function_coverage=1 00:08:50.089 --rc genhtml_legend=1 00:08:50.089 --rc geninfo_all_blocks=1 00:08:50.089 --rc geninfo_unexecuted_blocks=1 00:08:50.089 00:08:50.089 ' 00:08:50.089 13:34:43 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:50.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.089 --rc genhtml_branch_coverage=1 00:08:50.089 --rc genhtml_function_coverage=1 00:08:50.089 --rc genhtml_legend=1 00:08:50.089 --rc geninfo_all_blocks=1 00:08:50.089 --rc geninfo_unexecuted_blocks=1 00:08:50.089 00:08:50.089 ' 00:08:50.089 13:34:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:50.089 13:34:43 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:50.089 13:34:43 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.089 13:34:43 env -- common/autotest_common.sh@10 -- # set +x 00:08:50.089 ************************************ 00:08:50.089 START TEST env_memory 00:08:50.089 ************************************ 00:08:50.089 13:34:43 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:50.089 00:08:50.089 00:08:50.089 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.089 http://cunit.sourceforge.net/ 00:08:50.089 00:08:50.089 00:08:50.089 Suite: memory 00:08:50.089 Test: alloc and free memory map ...[2024-11-06 13:34:43.985850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:50.089 passed 00:08:50.089 Test: mem map translation ...[2024-11-06 13:34:44.058330] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:50.089 [2024-11-06 13:34:44.058451] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:50.089 [2024-11-06 13:34:44.058580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:50.089 [2024-11-06 13:34:44.058625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:50.347 passed 00:08:50.347 Test: mem map registration ...[2024-11-06 13:34:44.170319] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:50.347 [2024-11-06 13:34:44.170421] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:50.347 passed 00:08:50.347 Test: mem map adjacent registrations ...passed 00:08:50.347 00:08:50.347 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.347 suites 1 1 n/a 0 0 00:08:50.347 tests 4 4 4 0 0 00:08:50.347 asserts 152 152 152 0 n/a 00:08:50.347 00:08:50.347 Elapsed time = 0.383 seconds 00:08:50.606 ************************************ 00:08:50.606 END TEST env_memory 00:08:50.606 ************************************ 00:08:50.606 00:08:50.606 real 0m0.430s 00:08:50.606 user 0m0.394s 00:08:50.606 sys 0m0.027s 00:08:50.606 13:34:44 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.606 13:34:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:50.606 13:34:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:50.606 13:34:44 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:50.606 13:34:44 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.606 13:34:44 env -- common/autotest_common.sh@10 -- # set +x 00:08:50.606 ************************************ 00:08:50.606 START TEST env_vtophys 00:08:50.606 ************************************ 00:08:50.606 13:34:44 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:50.606 EAL: lib.eal log level changed from notice to debug 00:08:50.606 EAL: Detected lcore 0 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 1 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 2 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 3 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 4 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 5 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 6 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 7 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 8 as core 0 on socket 0 00:08:50.606 EAL: Detected lcore 9 as core 0 on socket 0 00:08:50.606 EAL: Maximum logical cores by configuration: 128 00:08:50.606 EAL: Detected CPU lcores: 10 00:08:50.606 EAL: Detected NUMA nodes: 1 00:08:50.606 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:50.606 EAL: Detected shared linkage of DPDK 00:08:50.606 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:50.606 EAL: Selected IOVA mode 'PA' 00:08:50.606 EAL: Probing VFIO support... 00:08:50.606 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:50.606 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:50.606 EAL: Ask a virtual area of 0x2e000 bytes 00:08:50.606 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:50.606 EAL: Setting up physically contiguous memory... 00:08:50.606 EAL: Setting maximum number of open files to 524288 00:08:50.606 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:50.606 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:50.606 EAL: Ask a virtual area of 0x61000 bytes 00:08:50.606 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:50.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:50.606 EAL: Ask a virtual area of 0x400000000 bytes 00:08:50.606 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:50.606 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:50.606 EAL: Ask a virtual area of 0x61000 bytes 00:08:50.606 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:50.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:50.606 EAL: Ask a virtual area of 0x400000000 bytes 00:08:50.606 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:50.606 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:50.606 EAL: Ask a virtual area of 0x61000 bytes 00:08:50.606 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:50.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:50.606 EAL: Ask a virtual area of 0x400000000 bytes 00:08:50.606 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:50.606 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:50.606 EAL: Ask a virtual area of 0x61000 bytes 00:08:50.606 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:50.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:50.606 EAL: Ask a virtual area of 0x400000000 bytes 00:08:50.606 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:50.606 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:50.606 EAL: Hugepages will be freed exactly as allocated. 00:08:50.606 EAL: No shared files mode enabled, IPC is disabled 00:08:50.606 EAL: No shared files mode enabled, IPC is disabled 00:08:50.866 EAL: TSC frequency is ~2100000 KHz 00:08:50.866 EAL: Main lcore 0 is ready (tid=7f3bc2929a40;cpuset=[0]) 00:08:50.866 EAL: Trying to obtain current memory policy. 00:08:50.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:50.866 EAL: Restoring previous memory policy: 0 00:08:50.866 EAL: request: mp_malloc_sync 00:08:50.866 EAL: No shared files mode enabled, IPC is disabled 00:08:50.866 EAL: Heap on socket 0 was expanded by 2MB 00:08:50.866 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:50.866 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:50.866 EAL: Mem event callback 'spdk:(nil)' registered 00:08:50.866 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:50.866 00:08:50.866 00:08:50.866 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.866 http://cunit.sourceforge.net/ 00:08:50.866 00:08:50.866 00:08:50.866 Suite: components_suite 00:08:51.434 Test: vtophys_malloc_test ...passed 00:08:51.434 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:51.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.434 EAL: Restoring previous memory policy: 4 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was expanded by 4MB 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was shrunk by 4MB 00:08:51.434 EAL: Trying to obtain current memory policy. 00:08:51.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.434 EAL: Restoring previous memory policy: 4 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was expanded by 6MB 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was shrunk by 6MB 00:08:51.434 EAL: Trying to obtain current memory policy. 00:08:51.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.434 EAL: Restoring previous memory policy: 4 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was expanded by 10MB 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was shrunk by 10MB 00:08:51.434 EAL: Trying to obtain current memory policy. 00:08:51.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.434 EAL: Restoring previous memory policy: 4 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was expanded by 18MB 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was shrunk by 18MB 00:08:51.434 EAL: Trying to obtain current memory policy. 00:08:51.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.434 EAL: Restoring previous memory policy: 4 00:08:51.434 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.434 EAL: request: mp_malloc_sync 00:08:51.434 EAL: No shared files mode enabled, IPC is disabled 00:08:51.434 EAL: Heap on socket 0 was expanded by 34MB 00:08:51.693 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.693 EAL: request: mp_malloc_sync 00:08:51.693 EAL: No shared files mode enabled, IPC is disabled 00:08:51.693 EAL: Heap on socket 0 was shrunk by 34MB 00:08:51.693 EAL: Trying to obtain current memory policy. 
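The expand/shrink pairs that follow are the EAL heap growing and shrinking underneath SPDK: each allocation in the CUnit suite fires the registered mem event callback ('spdk:(nil)'), which is how SPDK keeps its memory maps in sync with DPDK. The 2 MB hugepage granularity of those maps is also consistent with the earlier env_memory errors, where vaddr=2097152 len=1234 was rejected as invalid. Below is a minimal sketch of a consumer of that notification path, assuming the public spdk/env.h API (spdk_mem_map_alloc and friends); the identity translation recorded here is illustrative only:

```c
#include "spdk/env.h"

/* Illustrative notify callback: invoked as memory is registered with /
 * unregistered from SPDK, mirroring the "Heap on socket 0 was
 * expanded/shrunk" events in the log. vaddr and size arrive in whole
 * 2 MB hugepage units. */
static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action,
	  void *vaddr, size_t size)
{
	if (action == SPDK_MEM_MAP_NOTIFY_REGISTER) {
		/* Record an identity translation for the new region. */
		return spdk_mem_map_set_translation(map, (uint64_t)vaddr,
						    size, (uint64_t)vaddr);
	}
	return spdk_mem_map_clear_translation(map, (uint64_t)vaddr, size);
}

static const struct spdk_mem_map_ops ops = {
	.notify_cb = notify_cb,
	.are_contiguous = NULL,
};

/* After spdk_env_init(), allocate a map; notify_cb then fires for all
 * existing and future memory registrations. */
struct spdk_mem_map *
make_map(void)
{
	return spdk_mem_map_alloc(0 /* default translation */, &ops, NULL);
}
```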
00:08:51.693 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.693 EAL: Restoring previous memory policy: 4 00:08:51.693 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.693 EAL: request: mp_malloc_sync 00:08:51.693 EAL: No shared files mode enabled, IPC is disabled 00:08:51.693 EAL: Heap on socket 0 was expanded by 66MB 00:08:51.693 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.693 EAL: request: mp_malloc_sync 00:08:51.693 EAL: No shared files mode enabled, IPC is disabled 00:08:51.693 EAL: Heap on socket 0 was shrunk by 66MB 00:08:51.952 EAL: Trying to obtain current memory policy. 00:08:51.952 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.952 EAL: Restoring previous memory policy: 4 00:08:51.952 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.952 EAL: request: mp_malloc_sync 00:08:51.952 EAL: No shared files mode enabled, IPC is disabled 00:08:51.952 EAL: Heap on socket 0 was expanded by 130MB 00:08:52.210 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.210 EAL: request: mp_malloc_sync 00:08:52.210 EAL: No shared files mode enabled, IPC is disabled 00:08:52.210 EAL: Heap on socket 0 was shrunk by 130MB 00:08:52.469 EAL: Trying to obtain current memory policy. 00:08:52.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.469 EAL: Restoring previous memory policy: 4 00:08:52.469 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.469 EAL: request: mp_malloc_sync 00:08:52.469 EAL: No shared files mode enabled, IPC is disabled 00:08:52.469 EAL: Heap on socket 0 was expanded by 258MB 00:08:53.036 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.036 EAL: request: mp_malloc_sync 00:08:53.036 EAL: No shared files mode enabled, IPC is disabled 00:08:53.036 EAL: Heap on socket 0 was shrunk by 258MB 00:08:53.601 EAL: Trying to obtain current memory policy. 00:08:53.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:53.601 EAL: Restoring previous memory policy: 4 00:08:53.601 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.601 EAL: request: mp_malloc_sync 00:08:53.602 EAL: No shared files mode enabled, IPC is disabled 00:08:53.602 EAL: Heap on socket 0 was expanded by 514MB 00:08:54.975 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.975 EAL: request: mp_malloc_sync 00:08:54.975 EAL: No shared files mode enabled, IPC is disabled 00:08:54.975 EAL: Heap on socket 0 was shrunk by 514MB 00:08:55.910 EAL: Trying to obtain current memory policy. 
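vtophys_malloc_test walks through progressively larger allocations; each round allocates DMA-safe memory, resolves its physical translation, and frees it again, which is why every "expanded by" above has a matching "shrunk by". A sketch of one such allocate-translate-free round, assuming the public spdk_malloc/spdk_vtophys API (buffer size and alignment here are arbitrary):

```c
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Sketch of one vtophys round: allocate a DMA-safe buffer from the
 * hugepage heap (this is what drives the heap expand/shrink events
 * above), resolve its physical address, then release it. */
static int
vtophys_round(size_t size)
{
	uint64_t len = size;
	uint64_t paddr;
	void *buf;

	buf = spdk_malloc(size, 0x1000 /* alignment, illustrative */, NULL,
			  SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (buf == NULL) {
		return -1;
	}

	paddr = spdk_vtophys(buf, &len);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		spdk_free(buf);
		return -1;
	}
	printf("vaddr %p -> paddr 0x%" PRIx64 " (contiguous for %" PRIu64 " bytes)\n",
	       buf, paddr, len);

	spdk_free(buf);
	return 0;
}
```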
00:08:55.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:55.910 EAL: Restoring previous memory policy: 4 00:08:55.910 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.910 EAL: request: mp_malloc_sync 00:08:55.910 EAL: No shared files mode enabled, IPC is disabled 00:08:55.910 EAL: Heap on socket 0 was expanded by 1026MB 00:08:58.441 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.441 EAL: request: mp_malloc_sync 00:08:58.441 EAL: No shared files mode enabled, IPC is disabled 00:08:58.441 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:00.344 passed 00:09:00.344 00:09:00.344 Run Summary: Type Total Ran Passed Failed Inactive 00:09:00.344 suites 1 1 n/a 0 0 00:09:00.344 tests 2 2 2 0 0 00:09:00.344 asserts 5719 5719 5719 0 n/a 00:09:00.344 00:09:00.344 Elapsed time = 9.361 seconds 00:09:00.344 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.344 EAL: request: mp_malloc_sync 00:09:00.344 EAL: No shared files mode enabled, IPC is disabled 00:09:00.344 EAL: Heap on socket 0 was shrunk by 2MB 00:09:00.344 EAL: No shared files mode enabled, IPC is disabled 00:09:00.344 EAL: No shared files mode enabled, IPC is disabled 00:09:00.344 EAL: No shared files mode enabled, IPC is disabled 00:09:00.344 00:09:00.344 real 0m9.769s 00:09:00.344 user 0m8.594s 00:09:00.344 sys 0m1.000s 00:09:00.344 13:34:54 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.344 ************************************ 00:09:00.344 END TEST env_vtophys 00:09:00.344 ************************************ 00:09:00.344 13:34:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:00.344 13:34:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:00.344 13:34:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:00.344 13:34:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.344 13:34:54 env -- common/autotest_common.sh@10 -- # set +x 00:09:00.344 ************************************ 00:09:00.344 START TEST env_pci 00:09:00.344 ************************************ 00:09:00.344 13:34:54 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:00.344 00:09:00.344 00:09:00.344 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.344 http://cunit.sourceforge.net/ 00:09:00.344 00:09:00.344 00:09:00.344 Suite: pci 00:09:00.344 Test: pci_hook ...[2024-11-06 13:34:54.246167] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57795 has claimed it 00:09:00.344 passed 00:09:00.344 00:09:00.344 Run Summary: Type Total Ran Passed Failed Inactive 00:09:00.344 suites 1 1 n/a 0 0 00:09:00.344 tests 1 1 1 0 0 00:09:00.344 asserts 25 25 25 0 n/a 00:09:00.344 00:09:00.344 Elapsed time = 0.008 seconds 00:09:00.344 EAL: Cannot find device (10000:00:01.0) 00:09:00.344 EAL: Failed to attach device on primary process 00:09:00.344 00:09:00.344 real 0m0.098s 00:09:00.344 user 0m0.042s 00:09:00.344 sys 0m0.054s 00:09:00.344 ************************************ 00:09:00.344 END TEST env_pci 00:09:00.344 ************************************ 00:09:00.344 13:34:54 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.344 13:34:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:00.604 13:34:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:00.604 13:34:54 env -- env/env.sh@15 -- # uname 00:09:00.604 13:34:54 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:00.604 13:34:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:00.604 13:34:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:00.604 13:34:54 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:00.604 13:34:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.604 13:34:54 env -- common/autotest_common.sh@10 -- # set +x 00:09:00.604 ************************************ 00:09:00.604 START TEST env_dpdk_post_init 00:09:00.604 ************************************ 00:09:00.604 13:34:54 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:00.604 EAL: Detected CPU lcores: 10 00:09:00.604 EAL: Detected NUMA nodes: 1 00:09:00.604 EAL: Detected shared linkage of DPDK 00:09:00.604 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:00.604 EAL: Selected IOVA mode 'PA' 00:09:00.604 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:00.863 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:00.863 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:00.863 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:09:00.863 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:09:00.863 Starting DPDK initialization... 00:09:00.863 Starting SPDK post initialization... 00:09:00.863 SPDK NVMe probe 00:09:00.863 Attaching to 0000:00:10.0 00:09:00.863 Attaching to 0000:00:11.0 00:09:00.863 Attaching to 0000:00:12.0 00:09:00.863 Attaching to 0000:00:13.0 00:09:00.863 Attached to 0000:00:10.0 00:09:00.863 Attached to 0000:00:11.0 00:09:00.863 Attached to 0000:00:13.0 00:09:00.863 Attached to 0000:00:12.0 00:09:00.863 Cleaning up... 
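The four "Attaching to/Attached to" pairs above come from the spdk_nvme driver probing the emulated controllers at 0000:00:10.0 through 0000:00:13.0. In application code the same flow is a probe/attach callback pair passed to spdk_nvme_probe(); a minimal sketch, assuming the public spdk/nvme.h API (the app name and prints are illustrative):

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Called once per discovered controller; returning true asks the
 * driver to attach (the "Attaching to ..." lines above). */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;
}

/* Called after attach completes (the "Attached to ..." lines). */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr,
	  const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "nvme_probe_sketch"; /* illustrative name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL trid: enumerate all local PCIe NVMe controllers. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		fprintf(stderr, "spdk_nvme_probe() failed\n");
		return 1;
	}
	return 0;
}
```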
00:09:00.863 ************************************ 00:09:00.863 END TEST env_dpdk_post_init 00:09:00.863 ************************************ 00:09:00.863 00:09:00.863 real 0m0.323s 00:09:00.863 user 0m0.112s 00:09:00.863 sys 0m0.112s 00:09:00.863 13:34:54 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.863 13:34:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:00.863 13:34:54 env -- env/env.sh@26 -- # uname 00:09:00.863 13:34:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:00.863 13:34:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:00.863 13:34:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:00.863 13:34:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.863 13:34:54 env -- common/autotest_common.sh@10 -- # set +x 00:09:00.863 ************************************ 00:09:00.863 START TEST env_mem_callbacks 00:09:00.863 ************************************ 00:09:00.863 13:34:54 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:00.863 EAL: Detected CPU lcores: 10 00:09:00.863 EAL: Detected NUMA nodes: 1 00:09:00.863 EAL: Detected shared linkage of DPDK 00:09:00.863 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:00.863 EAL: Selected IOVA mode 'PA' 00:09:01.121 00:09:01.121 00:09:01.121 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.121 http://cunit.sourceforge.net/ 00:09:01.121 00:09:01.121 00:09:01.121 Suite: memory 00:09:01.121 Test: test ... 00:09:01.121 register 0x200000200000 2097152 00:09:01.121 malloc 3145728 00:09:01.121 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:01.121 register 0x200000400000 4194304 00:09:01.121 buf 0x2000004fffc0 len 3145728 PASSED 00:09:01.121 malloc 64 00:09:01.121 buf 0x2000004ffec0 len 64 PASSED 00:09:01.121 malloc 4194304 00:09:01.121 register 0x200000800000 6291456 00:09:01.121 buf 0x2000009fffc0 len 4194304 PASSED 00:09:01.121 free 0x2000004fffc0 3145728 00:09:01.121 free 0x2000004ffec0 64 00:09:01.121 unregister 0x200000400000 4194304 PASSED 00:09:01.121 free 0x2000009fffc0 4194304 00:09:01.121 unregister 0x200000800000 6291456 PASSED 00:09:01.121 malloc 8388608 00:09:01.121 register 0x200000400000 10485760 00:09:01.121 buf 0x2000005fffc0 len 8388608 PASSED 00:09:01.122 free 0x2000005fffc0 8388608 00:09:01.122 unregister 0x200000400000 10485760 PASSED 00:09:01.122 passed 00:09:01.122 00:09:01.122 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.122 suites 1 1 n/a 0 0 00:09:01.122 tests 1 1 1 0 0 00:09:01.122 asserts 15 15 15 0 n/a 00:09:01.122 00:09:01.122 Elapsed time = 0.085 seconds 00:09:01.122 00:09:01.122 real 0m0.292s 00:09:01.122 user 0m0.119s 00:09:01.122 sys 0m0.070s 00:09:01.122 13:34:55 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.122 13:34:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:01.122 ************************************ 00:09:01.122 END TEST env_mem_callbacks 00:09:01.122 ************************************ 00:09:01.122 00:09:01.122 real 0m11.413s 00:09:01.122 user 0m9.476s 00:09:01.122 sys 0m1.546s 00:09:01.122 13:34:55 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.122 13:34:55 env -- common/autotest_common.sh@10 -- # set +x 00:09:01.122 ************************************ 00:09:01.122 END TEST env 00:09:01.122 
************************************ 00:09:01.380 13:34:55 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:01.380 13:34:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:01.380 13:34:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.380 13:34:55 -- common/autotest_common.sh@10 -- # set +x 00:09:01.380 ************************************ 00:09:01.380 START TEST rpc 00:09:01.380 ************************************ 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:01.380 * Looking for test storage... 00:09:01.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.380 13:34:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.380 13:34:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.380 13:34:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.380 13:34:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.380 13:34:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.380 13:34:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.380 13:34:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.380 13:34:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.380 13:34:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.380 13:34:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.380 13:34:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.380 13:34:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:01.380 13:34:55 rpc -- scripts/common.sh@345 -- # : 1 00:09:01.380 13:34:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.380 13:34:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.380 13:34:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:01.380 13:34:55 rpc -- scripts/common.sh@353 -- # local d=1 00:09:01.380 13:34:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.380 13:34:55 rpc -- scripts/common.sh@355 -- # echo 1 00:09:01.380 13:34:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.380 13:34:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:01.380 13:34:55 rpc -- scripts/common.sh@353 -- # local d=2 00:09:01.380 13:34:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.380 13:34:55 rpc -- scripts/common.sh@355 -- # echo 2 00:09:01.380 13:34:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.380 13:34:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.380 13:34:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.380 13:34:55 rpc -- scripts/common.sh@368 -- # return 0 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.380 --rc genhtml_branch_coverage=1 00:09:01.380 --rc genhtml_function_coverage=1 00:09:01.380 --rc genhtml_legend=1 00:09:01.380 --rc geninfo_all_blocks=1 00:09:01.380 --rc geninfo_unexecuted_blocks=1 00:09:01.380 00:09:01.380 ' 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.380 --rc genhtml_branch_coverage=1 00:09:01.380 --rc genhtml_function_coverage=1 00:09:01.380 --rc genhtml_legend=1 00:09:01.380 --rc geninfo_all_blocks=1 00:09:01.380 --rc geninfo_unexecuted_blocks=1 00:09:01.380 00:09:01.380 ' 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.380 --rc genhtml_branch_coverage=1 00:09:01.380 --rc genhtml_function_coverage=1 00:09:01.380 --rc genhtml_legend=1 00:09:01.380 --rc geninfo_all_blocks=1 00:09:01.380 --rc geninfo_unexecuted_blocks=1 00:09:01.380 00:09:01.380 ' 00:09:01.380 13:34:55 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.380 --rc genhtml_branch_coverage=1 00:09:01.380 --rc genhtml_function_coverage=1 00:09:01.380 --rc genhtml_legend=1 00:09:01.381 --rc geninfo_all_blocks=1 00:09:01.381 --rc geninfo_unexecuted_blocks=1 00:09:01.381 00:09:01.381 ' 00:09:01.381 13:34:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57922 00:09:01.381 13:34:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:01.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.381 13:34:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:01.381 13:34:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57922 00:09:01.381 13:34:55 rpc -- common/autotest_common.sh@833 -- # '[' -z 57922 ']' 00:09:01.381 13:34:55 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.381 13:34:55 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:01.381 13:34:55 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
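At this point the harness has launched spdk_tgt and polls until it accepts connections on /var/tmp/spdk.sock; every rpc_cmd that follows (bdev_malloc_create, bdev_get_bdevs, bdev_passthru_create, ...) is a JSON-RPC 2.0 request over that Unix domain socket. A stripped-down sketch of what one request looks like on the wire, using only POSIX calls (the method choice and buffer size are illustrative; error handling and response framing are trimmed):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Minimal sketch of what `rpc_cmd bdev_get_bdevs` does under the
 * hood: a JSON-RPC 2.0 request over the target's Unix domain socket. */
int
main(void)
{
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char resp[4096];
	ssize_t n;
	int fd;

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0) {
		perror("write");
	}
	n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp); /* JSON array of bdevs, as dumped in the log */
	}
	close(fd);
	return 0;
}
```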
00:09:01.381 13:34:55 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:01.381 13:34:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.639 [2024-11-06 13:34:55.511960] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:09:01.639 [2024-11-06 13:34:55.512191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57922 ] 00:09:01.897 [2024-11-06 13:34:55.716359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.154 [2024-11-06 13:34:55.902862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:02.154 [2024-11-06 13:34:55.902986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57922' to capture a snapshot of events at runtime. 00:09:02.154 [2024-11-06 13:34:55.903049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.155 [2024-11-06 13:34:55.903079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.155 [2024-11-06 13:34:55.903099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57922 for offline analysis/debug. 00:09:02.155 [2024-11-06 13:34:55.905138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.089 13:34:56 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:03.089 13:34:56 rpc -- common/autotest_common.sh@866 -- # return 0 00:09:03.089 13:34:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:03.089 13:34:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:03.089 13:34:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:03.089 13:34:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:03.089 13:34:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.089 13:34:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.089 13:34:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 ************************************ 00:09:03.089 START TEST rpc_integrity 00:09:03.089 ************************************ 00:09:03.089 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:09:03.089 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:03.089 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.089 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.089 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:03.089 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:03.089 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:03.089 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:03.089 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.089 13:34:57 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.348 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.348 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:03.348 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:03.348 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.348 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.348 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.348 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:03.348 { 00:09:03.348 "name": "Malloc0", 00:09:03.348 "aliases": [ 00:09:03.348 "011ecc01-1402-4b0d-a602-50b70e577d39" 00:09:03.348 ], 00:09:03.348 "product_name": "Malloc disk", 00:09:03.348 "block_size": 512, 00:09:03.348 "num_blocks": 16384, 00:09:03.348 "uuid": "011ecc01-1402-4b0d-a602-50b70e577d39", 00:09:03.348 "assigned_rate_limits": { 00:09:03.348 "rw_ios_per_sec": 0, 00:09:03.348 "rw_mbytes_per_sec": 0, 00:09:03.348 "r_mbytes_per_sec": 0, 00:09:03.348 "w_mbytes_per_sec": 0 00:09:03.348 }, 00:09:03.348 "claimed": false, 00:09:03.348 "zoned": false, 00:09:03.348 "supported_io_types": { 00:09:03.348 "read": true, 00:09:03.348 "write": true, 00:09:03.348 "unmap": true, 00:09:03.348 "flush": true, 00:09:03.348 "reset": true, 00:09:03.348 "nvme_admin": false, 00:09:03.348 "nvme_io": false, 00:09:03.348 "nvme_io_md": false, 00:09:03.348 "write_zeroes": true, 00:09:03.348 "zcopy": true, 00:09:03.348 "get_zone_info": false, 00:09:03.348 "zone_management": false, 00:09:03.348 "zone_append": false, 00:09:03.348 "compare": false, 00:09:03.348 "compare_and_write": false, 00:09:03.348 "abort": true, 00:09:03.348 "seek_hole": false, 00:09:03.348 "seek_data": false, 00:09:03.348 "copy": true, 00:09:03.348 "nvme_iov_md": false 00:09:03.348 }, 00:09:03.348 "memory_domains": [ 00:09:03.348 { 00:09:03.348 "dma_device_id": "system", 00:09:03.348 "dma_device_type": 1 00:09:03.348 }, 00:09:03.348 { 00:09:03.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.348 "dma_device_type": 2 00:09:03.348 } 00:09:03.348 ], 00:09:03.348 "driver_specific": {} 00:09:03.348 } 00:09:03.348 ]' 00:09:03.348 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:03.348 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:03.348 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:03.348 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.348 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.348 [2024-11-06 13:34:57.160777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:03.348 [2024-11-06 13:34:57.161076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.348 [2024-11-06 13:34:57.161134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:03.348 [2024-11-06 13:34:57.161161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.349 [2024-11-06 13:34:57.164234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.349 [2024-11-06 13:34:57.164295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:03.349 Passthru0 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.349 
13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.349 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:03.349 { 00:09:03.349 "name": "Malloc0", 00:09:03.349 "aliases": [ 00:09:03.349 "011ecc01-1402-4b0d-a602-50b70e577d39" 00:09:03.349 ], 00:09:03.349 "product_name": "Malloc disk", 00:09:03.349 "block_size": 512, 00:09:03.349 "num_blocks": 16384, 00:09:03.349 "uuid": "011ecc01-1402-4b0d-a602-50b70e577d39", 00:09:03.349 "assigned_rate_limits": { 00:09:03.349 "rw_ios_per_sec": 0, 00:09:03.349 "rw_mbytes_per_sec": 0, 00:09:03.349 "r_mbytes_per_sec": 0, 00:09:03.349 "w_mbytes_per_sec": 0 00:09:03.349 }, 00:09:03.349 "claimed": true, 00:09:03.349 "claim_type": "exclusive_write", 00:09:03.349 "zoned": false, 00:09:03.349 "supported_io_types": { 00:09:03.349 "read": true, 00:09:03.349 "write": true, 00:09:03.349 "unmap": true, 00:09:03.349 "flush": true, 00:09:03.349 "reset": true, 00:09:03.349 "nvme_admin": false, 00:09:03.349 "nvme_io": false, 00:09:03.349 "nvme_io_md": false, 00:09:03.349 "write_zeroes": true, 00:09:03.349 "zcopy": true, 00:09:03.349 "get_zone_info": false, 00:09:03.349 "zone_management": false, 00:09:03.349 "zone_append": false, 00:09:03.349 "compare": false, 00:09:03.349 "compare_and_write": false, 00:09:03.349 "abort": true, 00:09:03.349 "seek_hole": false, 00:09:03.349 "seek_data": false, 00:09:03.349 "copy": true, 00:09:03.349 "nvme_iov_md": false 00:09:03.349 }, 00:09:03.349 "memory_domains": [ 00:09:03.349 { 00:09:03.349 "dma_device_id": "system", 00:09:03.349 "dma_device_type": 1 00:09:03.349 }, 00:09:03.349 { 00:09:03.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.349 "dma_device_type": 2 00:09:03.349 } 00:09:03.349 ], 00:09:03.349 "driver_specific": {} 00:09:03.349 }, 00:09:03.349 { 00:09:03.349 "name": "Passthru0", 00:09:03.349 "aliases": [ 00:09:03.349 "f103d6c8-c7c3-5767-a48d-7c58fd9f47ad" 00:09:03.349 ], 00:09:03.349 "product_name": "passthru", 00:09:03.349 "block_size": 512, 00:09:03.349 "num_blocks": 16384, 00:09:03.349 "uuid": "f103d6c8-c7c3-5767-a48d-7c58fd9f47ad", 00:09:03.349 "assigned_rate_limits": { 00:09:03.349 "rw_ios_per_sec": 0, 00:09:03.349 "rw_mbytes_per_sec": 0, 00:09:03.349 "r_mbytes_per_sec": 0, 00:09:03.349 "w_mbytes_per_sec": 0 00:09:03.349 }, 00:09:03.349 "claimed": false, 00:09:03.349 "zoned": false, 00:09:03.349 "supported_io_types": { 00:09:03.349 "read": true, 00:09:03.349 "write": true, 00:09:03.349 "unmap": true, 00:09:03.349 "flush": true, 00:09:03.349 "reset": true, 00:09:03.349 "nvme_admin": false, 00:09:03.349 "nvme_io": false, 00:09:03.349 "nvme_io_md": false, 00:09:03.349 "write_zeroes": true, 00:09:03.349 "zcopy": true, 00:09:03.349 "get_zone_info": false, 00:09:03.349 "zone_management": false, 00:09:03.349 "zone_append": false, 00:09:03.349 "compare": false, 00:09:03.349 "compare_and_write": false, 00:09:03.349 "abort": true, 00:09:03.349 "seek_hole": false, 00:09:03.349 "seek_data": false, 00:09:03.349 "copy": true, 00:09:03.349 "nvme_iov_md": false 00:09:03.349 }, 00:09:03.349 "memory_domains": [ 00:09:03.349 { 00:09:03.349 "dma_device_id": "system", 00:09:03.349 "dma_device_type": 1 00:09:03.349 }, 00:09:03.349 { 00:09:03.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.349 "dma_device_type": 2 
00:09:03.349 } 00:09:03.349 ], 00:09:03.349 "driver_specific": { 00:09:03.349 "passthru": { 00:09:03.349 "name": "Passthru0", 00:09:03.349 "base_bdev_name": "Malloc0" 00:09:03.349 } 00:09:03.349 } 00:09:03.349 } 00:09:03.349 ]' 00:09:03.349 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:03.349 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:03.349 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.349 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.349 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.349 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.349 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:03.349 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:03.608 ************************************ 00:09:03.608 END TEST rpc_integrity 00:09:03.608 ************************************ 00:09:03.608 13:34:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:03.608 00:09:03.608 real 0m0.342s 00:09:03.608 user 0m0.180s 00:09:03.608 sys 0m0.055s 00:09:03.608 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.608 13:34:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.608 13:34:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:03.608 13:34:57 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.608 13:34:57 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.608 13:34:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.608 ************************************ 00:09:03.608 START TEST rpc_plugins 00:09:03.608 ************************************ 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:09:03.608 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.608 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:03.608 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.608 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:03.608 { 00:09:03.608 "name": "Malloc1", 00:09:03.608 "aliases": 
[ 00:09:03.608 "c3aaeb41-7da1-4c04-9bea-2b849453e0ca" 00:09:03.608 ], 00:09:03.608 "product_name": "Malloc disk", 00:09:03.608 "block_size": 4096, 00:09:03.608 "num_blocks": 256, 00:09:03.608 "uuid": "c3aaeb41-7da1-4c04-9bea-2b849453e0ca", 00:09:03.608 "assigned_rate_limits": { 00:09:03.608 "rw_ios_per_sec": 0, 00:09:03.608 "rw_mbytes_per_sec": 0, 00:09:03.608 "r_mbytes_per_sec": 0, 00:09:03.608 "w_mbytes_per_sec": 0 00:09:03.608 }, 00:09:03.608 "claimed": false, 00:09:03.608 "zoned": false, 00:09:03.608 "supported_io_types": { 00:09:03.608 "read": true, 00:09:03.608 "write": true, 00:09:03.608 "unmap": true, 00:09:03.608 "flush": true, 00:09:03.608 "reset": true, 00:09:03.608 "nvme_admin": false, 00:09:03.608 "nvme_io": false, 00:09:03.608 "nvme_io_md": false, 00:09:03.608 "write_zeroes": true, 00:09:03.608 "zcopy": true, 00:09:03.608 "get_zone_info": false, 00:09:03.608 "zone_management": false, 00:09:03.608 "zone_append": false, 00:09:03.608 "compare": false, 00:09:03.608 "compare_and_write": false, 00:09:03.608 "abort": true, 00:09:03.608 "seek_hole": false, 00:09:03.608 "seek_data": false, 00:09:03.608 "copy": true, 00:09:03.608 "nvme_iov_md": false 00:09:03.608 }, 00:09:03.608 "memory_domains": [ 00:09:03.608 { 00:09:03.608 "dma_device_id": "system", 00:09:03.608 "dma_device_type": 1 00:09:03.608 }, 00:09:03.608 { 00:09:03.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.608 "dma_device_type": 2 00:09:03.608 } 00:09:03.608 ], 00:09:03.608 "driver_specific": {} 00:09:03.608 } 00:09:03.608 ]' 00:09:03.608 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:03.608 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:03.608 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.608 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.608 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:03.609 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.609 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.609 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.609 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:03.609 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:03.609 ************************************ 00:09:03.609 END TEST rpc_plugins 00:09:03.609 ************************************ 00:09:03.609 13:34:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:03.609 00:09:03.609 real 0m0.174s 00:09:03.609 user 0m0.103s 00:09:03.609 sys 0m0.026s 00:09:03.609 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.609 13:34:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:03.868 13:34:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:03.868 13:34:57 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.868 13:34:57 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.868 13:34:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.868 ************************************ 00:09:03.868 START TEST rpc_trace_cmd_test 00:09:03.868 ************************************ 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:03.868 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57922", 00:09:03.868 "tpoint_group_mask": "0x8", 00:09:03.868 "iscsi_conn": { 00:09:03.868 "mask": "0x2", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "scsi": { 00:09:03.868 "mask": "0x4", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "bdev": { 00:09:03.868 "mask": "0x8", 00:09:03.868 "tpoint_mask": "0xffffffffffffffff" 00:09:03.868 }, 00:09:03.868 "nvmf_rdma": { 00:09:03.868 "mask": "0x10", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "nvmf_tcp": { 00:09:03.868 "mask": "0x20", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "ftl": { 00:09:03.868 "mask": "0x40", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "blobfs": { 00:09:03.868 "mask": "0x80", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "dsa": { 00:09:03.868 "mask": "0x200", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "thread": { 00:09:03.868 "mask": "0x400", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "nvme_pcie": { 00:09:03.868 "mask": "0x800", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "iaa": { 00:09:03.868 "mask": "0x1000", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "nvme_tcp": { 00:09:03.868 "mask": "0x2000", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "bdev_nvme": { 00:09:03.868 "mask": "0x4000", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "sock": { 00:09:03.868 "mask": "0x8000", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "blob": { 00:09:03.868 "mask": "0x10000", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "bdev_raid": { 00:09:03.868 "mask": "0x20000", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 }, 00:09:03.868 "scheduler": { 00:09:03.868 "mask": "0x40000", 00:09:03.868 "tpoint_mask": "0x0" 00:09:03.868 } 00:09:03.868 }' 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:03.868 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:04.127 ************************************ 00:09:04.127 END TEST rpc_trace_cmd_test 00:09:04.127 ************************************ 00:09:04.127 13:34:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:04.127 00:09:04.127 real 0m0.236s 
00:09:04.127 user 0m0.199s 00:09:04.127 sys 0m0.030s 00:09:04.127 13:34:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.127 13:34:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.127 13:34:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:04.127 13:34:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:04.127 13:34:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:04.127 13:34:57 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:04.127 13:34:57 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:04.127 13:34:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.127 ************************************ 00:09:04.127 START TEST rpc_daemon_integrity 00:09:04.127 ************************************ 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.127 13:34:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.127 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.127 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:04.127 { 00:09:04.127 "name": "Malloc2", 00:09:04.127 "aliases": [ 00:09:04.127 "1dfd1f47-c65a-418c-b293-a58b6af4a1a9" 00:09:04.127 ], 00:09:04.127 "product_name": "Malloc disk", 00:09:04.127 "block_size": 512, 00:09:04.127 "num_blocks": 16384, 00:09:04.127 "uuid": "1dfd1f47-c65a-418c-b293-a58b6af4a1a9", 00:09:04.127 "assigned_rate_limits": { 00:09:04.127 "rw_ios_per_sec": 0, 00:09:04.127 "rw_mbytes_per_sec": 0, 00:09:04.127 "r_mbytes_per_sec": 0, 00:09:04.127 "w_mbytes_per_sec": 0 00:09:04.127 }, 00:09:04.127 "claimed": false, 00:09:04.127 "zoned": false, 00:09:04.127 "supported_io_types": { 00:09:04.127 "read": true, 00:09:04.127 "write": true, 00:09:04.127 "unmap": true, 00:09:04.127 "flush": true, 00:09:04.127 "reset": true, 00:09:04.127 "nvme_admin": false, 00:09:04.127 "nvme_io": false, 00:09:04.127 "nvme_io_md": false, 00:09:04.127 "write_zeroes": true, 00:09:04.127 "zcopy": true, 00:09:04.127 "get_zone_info": false, 00:09:04.127 "zone_management": false, 00:09:04.127 "zone_append": false, 00:09:04.127 "compare": false, 00:09:04.127 
"compare_and_write": false, 00:09:04.127 "abort": true, 00:09:04.127 "seek_hole": false, 00:09:04.127 "seek_data": false, 00:09:04.127 "copy": true, 00:09:04.127 "nvme_iov_md": false 00:09:04.127 }, 00:09:04.127 "memory_domains": [ 00:09:04.127 { 00:09:04.127 "dma_device_id": "system", 00:09:04.127 "dma_device_type": 1 00:09:04.127 }, 00:09:04.128 { 00:09:04.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.128 "dma_device_type": 2 00:09:04.128 } 00:09:04.128 ], 00:09:04.128 "driver_specific": {} 00:09:04.128 } 00:09:04.128 ]' 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.128 [2024-11-06 13:34:58.067134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:04.128 [2024-11-06 13:34:58.067221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.128 [2024-11-06 13:34:58.067252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:04.128 [2024-11-06 13:34:58.067269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.128 [2024-11-06 13:34:58.070332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.128 [2024-11-06 13:34:58.070388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:04.128 Passthru0 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.128 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:04.128 { 00:09:04.128 "name": "Malloc2", 00:09:04.128 "aliases": [ 00:09:04.128 "1dfd1f47-c65a-418c-b293-a58b6af4a1a9" 00:09:04.128 ], 00:09:04.128 "product_name": "Malloc disk", 00:09:04.128 "block_size": 512, 00:09:04.128 "num_blocks": 16384, 00:09:04.128 "uuid": "1dfd1f47-c65a-418c-b293-a58b6af4a1a9", 00:09:04.128 "assigned_rate_limits": { 00:09:04.128 "rw_ios_per_sec": 0, 00:09:04.128 "rw_mbytes_per_sec": 0, 00:09:04.128 "r_mbytes_per_sec": 0, 00:09:04.128 "w_mbytes_per_sec": 0 00:09:04.128 }, 00:09:04.128 "claimed": true, 00:09:04.128 "claim_type": "exclusive_write", 00:09:04.128 "zoned": false, 00:09:04.128 "supported_io_types": { 00:09:04.128 "read": true, 00:09:04.128 "write": true, 00:09:04.128 "unmap": true, 00:09:04.128 "flush": true, 00:09:04.128 "reset": true, 00:09:04.128 "nvme_admin": false, 00:09:04.128 "nvme_io": false, 00:09:04.128 "nvme_io_md": false, 00:09:04.128 "write_zeroes": true, 00:09:04.128 "zcopy": true, 00:09:04.128 "get_zone_info": false, 00:09:04.128 "zone_management": false, 00:09:04.128 "zone_append": false, 00:09:04.128 "compare": false, 00:09:04.128 "compare_and_write": false, 00:09:04.128 "abort": true, 00:09:04.128 "seek_hole": false, 00:09:04.128 "seek_data": false, 
00:09:04.128 "copy": true, 00:09:04.128 "nvme_iov_md": false 00:09:04.128 }, 00:09:04.128 "memory_domains": [ 00:09:04.128 { 00:09:04.128 "dma_device_id": "system", 00:09:04.128 "dma_device_type": 1 00:09:04.128 }, 00:09:04.128 { 00:09:04.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.128 "dma_device_type": 2 00:09:04.128 } 00:09:04.128 ], 00:09:04.128 "driver_specific": {} 00:09:04.128 }, 00:09:04.128 { 00:09:04.128 "name": "Passthru0", 00:09:04.128 "aliases": [ 00:09:04.128 "78220a46-f948-520f-8401-a5cbe464f7a4" 00:09:04.128 ], 00:09:04.128 "product_name": "passthru", 00:09:04.128 "block_size": 512, 00:09:04.128 "num_blocks": 16384, 00:09:04.128 "uuid": "78220a46-f948-520f-8401-a5cbe464f7a4", 00:09:04.128 "assigned_rate_limits": { 00:09:04.128 "rw_ios_per_sec": 0, 00:09:04.128 "rw_mbytes_per_sec": 0, 00:09:04.128 "r_mbytes_per_sec": 0, 00:09:04.128 "w_mbytes_per_sec": 0 00:09:04.128 }, 00:09:04.128 "claimed": false, 00:09:04.128 "zoned": false, 00:09:04.128 "supported_io_types": { 00:09:04.128 "read": true, 00:09:04.128 "write": true, 00:09:04.128 "unmap": true, 00:09:04.128 "flush": true, 00:09:04.128 "reset": true, 00:09:04.128 "nvme_admin": false, 00:09:04.128 "nvme_io": false, 00:09:04.128 "nvme_io_md": false, 00:09:04.128 "write_zeroes": true, 00:09:04.128 "zcopy": true, 00:09:04.128 "get_zone_info": false, 00:09:04.128 "zone_management": false, 00:09:04.128 "zone_append": false, 00:09:04.128 "compare": false, 00:09:04.128 "compare_and_write": false, 00:09:04.128 "abort": true, 00:09:04.128 "seek_hole": false, 00:09:04.128 "seek_data": false, 00:09:04.128 "copy": true, 00:09:04.128 "nvme_iov_md": false 00:09:04.128 }, 00:09:04.128 "memory_domains": [ 00:09:04.128 { 00:09:04.128 "dma_device_id": "system", 00:09:04.128 "dma_device_type": 1 00:09:04.128 }, 00:09:04.128 { 00:09:04.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.128 "dma_device_type": 2 00:09:04.128 } 00:09:04.128 ], 00:09:04.128 "driver_specific": { 00:09:04.128 "passthru": { 00:09:04.128 "name": "Passthru0", 00:09:04.128 "base_bdev_name": "Malloc2" 00:09:04.128 } 00:09:04.128 } 00:09:04.128 } 00:09:04.128 ]' 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:04.386 ************************************ 00:09:04.386 END TEST rpc_daemon_integrity 00:09:04.386 ************************************ 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:04.386 00:09:04.386 real 0m0.329s 00:09:04.386 user 0m0.175s 00:09:04.386 sys 0m0.053s 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.386 13:34:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.386 13:34:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:04.386 13:34:58 rpc -- rpc/rpc.sh@84 -- # killprocess 57922 00:09:04.386 13:34:58 rpc -- common/autotest_common.sh@952 -- # '[' -z 57922 ']' 00:09:04.386 13:34:58 rpc -- common/autotest_common.sh@956 -- # kill -0 57922 00:09:04.386 13:34:58 rpc -- common/autotest_common.sh@957 -- # uname 00:09:04.386 13:34:58 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:04.386 13:34:58 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57922 00:09:04.386 killing process with pid 57922 00:09:04.386 13:34:58 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:04.386 13:34:58 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:04.387 13:34:58 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57922' 00:09:04.387 13:34:58 rpc -- common/autotest_common.sh@971 -- # kill 57922 00:09:04.387 13:34:58 rpc -- common/autotest_common.sh@976 -- # wait 57922 00:09:07.672 00:09:07.672 real 0m6.023s 00:09:07.672 user 0m6.671s 00:09:07.672 sys 0m0.947s 00:09:07.672 ************************************ 00:09:07.672 END TEST rpc 00:09:07.672 ************************************ 00:09:07.672 13:35:01 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:07.672 13:35:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.672 13:35:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:07.672 13:35:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:07.672 13:35:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:07.672 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:09:07.672 ************************************ 00:09:07.672 START TEST skip_rpc 00:09:07.672 ************************************ 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:07.672 * Looking for test storage... 
00:09:07.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.672 13:35:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:07.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.672 --rc genhtml_branch_coverage=1 00:09:07.672 --rc genhtml_function_coverage=1 00:09:07.672 --rc genhtml_legend=1 00:09:07.672 --rc geninfo_all_blocks=1 00:09:07.672 --rc geninfo_unexecuted_blocks=1 00:09:07.672 00:09:07.672 ' 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:07.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.672 --rc genhtml_branch_coverage=1 00:09:07.672 --rc genhtml_function_coverage=1 00:09:07.672 --rc genhtml_legend=1 00:09:07.672 --rc geninfo_all_blocks=1 00:09:07.672 --rc geninfo_unexecuted_blocks=1 00:09:07.672 00:09:07.672 ' 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:07.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.672 --rc genhtml_branch_coverage=1 00:09:07.672 --rc genhtml_function_coverage=1 00:09:07.672 --rc genhtml_legend=1 00:09:07.672 --rc geninfo_all_blocks=1 00:09:07.672 --rc geninfo_unexecuted_blocks=1 00:09:07.672 00:09:07.672 ' 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:07.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.672 --rc genhtml_branch_coverage=1 00:09:07.672 --rc genhtml_function_coverage=1 00:09:07.672 --rc genhtml_legend=1 00:09:07.672 --rc geninfo_all_blocks=1 00:09:07.672 --rc geninfo_unexecuted_blocks=1 00:09:07.672 00:09:07.672 ' 00:09:07.672 13:35:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:07.672 13:35:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:07.672 13:35:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:07.672 13:35:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.672 ************************************ 00:09:07.672 START TEST skip_rpc 00:09:07.672 ************************************ 00:09:07.672 13:35:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:09:07.672 13:35:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58162 00:09:07.672 13:35:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:07.672 13:35:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:07.672 13:35:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:07.672 [2024-11-06 13:35:01.555617] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
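skip_rpc launches the target with --no-rpc-server, sleeps while it boots, and then, in the NOT rpc_cmd block that follows, asserts that spdk_get_version fails because nothing is listening on the socket. A sketch of that negative assertion, assuming rpc_cmd is a thin wrapper around scripts/rpc.py:

  # expected to fail: the target was started with --no-rpc-server
  if scripts/rpc.py spdk_get_version 2>/dev/null; then
      echo 'RPC unexpectedly succeeded' >&2
      exit 1
  fi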
00:09:07.672 [2024-11-06 13:35:01.556013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58162 ] 00:09:07.930 [2024-11-06 13:35:01.758855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.201 [2024-11-06 13:35:01.947285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.434 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58162 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58162 ']' 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58162 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58162 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58162' 00:09:12.693 killing process with pid 58162 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58162 00:09:12.693 13:35:06 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58162 00:09:15.976 ************************************ 00:09:15.976 END TEST skip_rpc 00:09:15.976 ************************************ 00:09:15.976 00:09:15.976 real 0m7.929s 00:09:15.976 user 0m7.370s 00:09:15.976 sys 0m0.455s 00:09:15.976 13:35:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.976 13:35:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:09:15.976 13:35:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:15.976 13:35:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:15.976 13:35:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.976 13:35:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.976 ************************************ 00:09:15.976 START TEST skip_rpc_with_json 00:09:15.976 ************************************ 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58272 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58272 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58272 ']' 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:15.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:15.976 13:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:15.976 [2024-11-06 13:35:09.545188] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
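waitforlisten, traced above with max_retries=100, simply polls until the target's RPC socket answers. A stand-alone sketch of the same idea, assuming the default socket path:

  # poll until /var/tmp/spdk.sock accepts an RPC, up to 100 tries
  for ((i = 0; i < 100; i++)); do
      if [ -S /var/tmp/spdk.sock ] && scripts/rpc.py spdk_get_version &>/dev/null; then
          break
      fi
      sleep 0.5
  done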
00:09:15.976 [2024-11-06 13:35:09.546354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58272 ] 00:09:15.976 [2024-11-06 13:35:09.750336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.976 [2024-11-06 13:35:09.885921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:16.910 [2024-11-06 13:35:10.884420] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:16.910 request: 00:09:16.910 { 00:09:16.910 "trtype": "tcp", 00:09:16.910 "method": "nvmf_get_transports", 00:09:16.910 "req_id": 1 00:09:16.910 } 00:09:16.910 Got JSON-RPC error response 00:09:16.910 response: 00:09:16.910 { 00:09:16.910 "code": -19, 00:09:16.910 "message": "No such device" 00:09:16.910 } 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.910 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:17.168 [2024-11-06 13:35:10.896582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.168 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.168 13:35:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:17.168 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.168 13:35:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:17.168 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.168 13:35:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:17.168 { 00:09:17.168 "subsystems": [ 00:09:17.168 { 00:09:17.168 "subsystem": "fsdev", 00:09:17.168 "config": [ 00:09:17.168 { 00:09:17.168 "method": "fsdev_set_opts", 00:09:17.168 "params": { 00:09:17.168 "fsdev_io_pool_size": 65535, 00:09:17.168 "fsdev_io_cache_size": 256 00:09:17.168 } 00:09:17.168 } 00:09:17.168 ] 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "subsystem": "keyring", 00:09:17.168 "config": [] 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "subsystem": "iobuf", 00:09:17.168 "config": [ 00:09:17.168 { 00:09:17.168 "method": "iobuf_set_options", 00:09:17.168 "params": { 00:09:17.168 "small_pool_count": 8192, 00:09:17.168 "large_pool_count": 1024, 00:09:17.168 "small_bufsize": 8192, 00:09:17.168 "large_bufsize": 135168, 00:09:17.168 "enable_numa": false 00:09:17.168 } 00:09:17.168 } 00:09:17.168 ] 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "subsystem": "sock", 00:09:17.168 "config": [ 00:09:17.168 { 
00:09:17.168 "method": "sock_set_default_impl", 00:09:17.168 "params": { 00:09:17.168 "impl_name": "posix" 00:09:17.168 } 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "method": "sock_impl_set_options", 00:09:17.168 "params": { 00:09:17.168 "impl_name": "ssl", 00:09:17.168 "recv_buf_size": 4096, 00:09:17.168 "send_buf_size": 4096, 00:09:17.168 "enable_recv_pipe": true, 00:09:17.168 "enable_quickack": false, 00:09:17.168 "enable_placement_id": 0, 00:09:17.168 "enable_zerocopy_send_server": true, 00:09:17.168 "enable_zerocopy_send_client": false, 00:09:17.168 "zerocopy_threshold": 0, 00:09:17.168 "tls_version": 0, 00:09:17.168 "enable_ktls": false 00:09:17.168 } 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "method": "sock_impl_set_options", 00:09:17.168 "params": { 00:09:17.168 "impl_name": "posix", 00:09:17.168 "recv_buf_size": 2097152, 00:09:17.168 "send_buf_size": 2097152, 00:09:17.168 "enable_recv_pipe": true, 00:09:17.168 "enable_quickack": false, 00:09:17.168 "enable_placement_id": 0, 00:09:17.168 "enable_zerocopy_send_server": true, 00:09:17.168 "enable_zerocopy_send_client": false, 00:09:17.168 "zerocopy_threshold": 0, 00:09:17.168 "tls_version": 0, 00:09:17.168 "enable_ktls": false 00:09:17.168 } 00:09:17.168 } 00:09:17.168 ] 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "subsystem": "vmd", 00:09:17.168 "config": [] 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "subsystem": "accel", 00:09:17.168 "config": [ 00:09:17.168 { 00:09:17.168 "method": "accel_set_options", 00:09:17.168 "params": { 00:09:17.168 "small_cache_size": 128, 00:09:17.168 "large_cache_size": 16, 00:09:17.168 "task_count": 2048, 00:09:17.168 "sequence_count": 2048, 00:09:17.168 "buf_count": 2048 00:09:17.168 } 00:09:17.168 } 00:09:17.168 ] 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "subsystem": "bdev", 00:09:17.168 "config": [ 00:09:17.168 { 00:09:17.168 "method": "bdev_set_options", 00:09:17.168 "params": { 00:09:17.168 "bdev_io_pool_size": 65535, 00:09:17.168 "bdev_io_cache_size": 256, 00:09:17.168 "bdev_auto_examine": true, 00:09:17.168 "iobuf_small_cache_size": 128, 00:09:17.168 "iobuf_large_cache_size": 16 00:09:17.168 } 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "method": "bdev_raid_set_options", 00:09:17.168 "params": { 00:09:17.168 "process_window_size_kb": 1024, 00:09:17.168 "process_max_bandwidth_mb_sec": 0 00:09:17.168 } 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "method": "bdev_iscsi_set_options", 00:09:17.168 "params": { 00:09:17.168 "timeout_sec": 30 00:09:17.168 } 00:09:17.168 }, 00:09:17.168 { 00:09:17.168 "method": "bdev_nvme_set_options", 00:09:17.168 "params": { 00:09:17.168 "action_on_timeout": "none", 00:09:17.168 "timeout_us": 0, 00:09:17.168 "timeout_admin_us": 0, 00:09:17.168 "keep_alive_timeout_ms": 10000, 00:09:17.168 "arbitration_burst": 0, 00:09:17.168 "low_priority_weight": 0, 00:09:17.168 "medium_priority_weight": 0, 00:09:17.168 "high_priority_weight": 0, 00:09:17.168 "nvme_adminq_poll_period_us": 10000, 00:09:17.168 "nvme_ioq_poll_period_us": 0, 00:09:17.168 "io_queue_requests": 0, 00:09:17.168 "delay_cmd_submit": true, 00:09:17.168 "transport_retry_count": 4, 00:09:17.168 "bdev_retry_count": 3, 00:09:17.168 "transport_ack_timeout": 0, 00:09:17.168 "ctrlr_loss_timeout_sec": 0, 00:09:17.168 "reconnect_delay_sec": 0, 00:09:17.168 "fast_io_fail_timeout_sec": 0, 00:09:17.168 "disable_auto_failback": false, 00:09:17.168 "generate_uuids": false, 00:09:17.168 "transport_tos": 0, 00:09:17.168 "nvme_error_stat": false, 00:09:17.168 "rdma_srq_size": 0, 00:09:17.168 "io_path_stat": false, 
00:09:17.168 "allow_accel_sequence": false, 00:09:17.168 "rdma_max_cq_size": 0, 00:09:17.169 "rdma_cm_event_timeout_ms": 0, 00:09:17.169 "dhchap_digests": [ 00:09:17.169 "sha256", 00:09:17.169 "sha384", 00:09:17.169 "sha512" 00:09:17.169 ], 00:09:17.169 "dhchap_dhgroups": [ 00:09:17.169 "null", 00:09:17.169 "ffdhe2048", 00:09:17.169 "ffdhe3072", 00:09:17.169 "ffdhe4096", 00:09:17.169 "ffdhe6144", 00:09:17.169 "ffdhe8192" 00:09:17.169 ] 00:09:17.169 } 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "method": "bdev_nvme_set_hotplug", 00:09:17.169 "params": { 00:09:17.169 "period_us": 100000, 00:09:17.169 "enable": false 00:09:17.169 } 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "method": "bdev_wait_for_examine" 00:09:17.169 } 00:09:17.169 ] 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "subsystem": "scsi", 00:09:17.169 "config": null 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "subsystem": "scheduler", 00:09:17.169 "config": [ 00:09:17.169 { 00:09:17.169 "method": "framework_set_scheduler", 00:09:17.169 "params": { 00:09:17.169 "name": "static" 00:09:17.169 } 00:09:17.169 } 00:09:17.169 ] 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "subsystem": "vhost_scsi", 00:09:17.169 "config": [] 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "subsystem": "vhost_blk", 00:09:17.169 "config": [] 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "subsystem": "ublk", 00:09:17.169 "config": [] 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "subsystem": "nbd", 00:09:17.169 "config": [] 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "subsystem": "nvmf", 00:09:17.169 "config": [ 00:09:17.169 { 00:09:17.169 "method": "nvmf_set_config", 00:09:17.169 "params": { 00:09:17.169 "discovery_filter": "match_any", 00:09:17.169 "admin_cmd_passthru": { 00:09:17.169 "identify_ctrlr": false 00:09:17.169 }, 00:09:17.169 "dhchap_digests": [ 00:09:17.169 "sha256", 00:09:17.169 "sha384", 00:09:17.169 "sha512" 00:09:17.169 ], 00:09:17.169 "dhchap_dhgroups": [ 00:09:17.169 "null", 00:09:17.169 "ffdhe2048", 00:09:17.169 "ffdhe3072", 00:09:17.169 "ffdhe4096", 00:09:17.169 "ffdhe6144", 00:09:17.169 "ffdhe8192" 00:09:17.169 ] 00:09:17.169 } 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "method": "nvmf_set_max_subsystems", 00:09:17.169 "params": { 00:09:17.169 "max_subsystems": 1024 00:09:17.169 } 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "method": "nvmf_set_crdt", 00:09:17.169 "params": { 00:09:17.169 "crdt1": 0, 00:09:17.169 "crdt2": 0, 00:09:17.169 "crdt3": 0 00:09:17.169 } 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "method": "nvmf_create_transport", 00:09:17.169 "params": { 00:09:17.169 "trtype": "TCP", 00:09:17.169 "max_queue_depth": 128, 00:09:17.169 "max_io_qpairs_per_ctrlr": 127, 00:09:17.169 "in_capsule_data_size": 4096, 00:09:17.169 "max_io_size": 131072, 00:09:17.169 "io_unit_size": 131072, 00:09:17.169 "max_aq_depth": 128, 00:09:17.169 "num_shared_buffers": 511, 00:09:17.169 "buf_cache_size": 4294967295, 00:09:17.169 "dif_insert_or_strip": false, 00:09:17.169 "zcopy": false, 00:09:17.169 "c2h_success": true, 00:09:17.169 "sock_priority": 0, 00:09:17.169 "abort_timeout_sec": 1, 00:09:17.169 "ack_timeout": 0, 00:09:17.169 "data_wr_pool_size": 0 00:09:17.169 } 00:09:17.169 } 00:09:17.169 ] 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "subsystem": "iscsi", 00:09:17.169 "config": [ 00:09:17.169 { 00:09:17.169 "method": "iscsi_set_options", 00:09:17.169 "params": { 00:09:17.169 "node_base": "iqn.2016-06.io.spdk", 00:09:17.169 "max_sessions": 128, 00:09:17.169 "max_connections_per_session": 2, 00:09:17.169 "max_queue_depth": 64, 00:09:17.169 
"default_time2wait": 2, 00:09:17.169 "default_time2retain": 20, 00:09:17.169 "first_burst_length": 8192, 00:09:17.169 "immediate_data": true, 00:09:17.169 "allow_duplicated_isid": false, 00:09:17.169 "error_recovery_level": 0, 00:09:17.169 "nop_timeout": 60, 00:09:17.169 "nop_in_interval": 30, 00:09:17.169 "disable_chap": false, 00:09:17.169 "require_chap": false, 00:09:17.169 "mutual_chap": false, 00:09:17.169 "chap_group": 0, 00:09:17.169 "max_large_datain_per_connection": 64, 00:09:17.169 "max_r2t_per_connection": 4, 00:09:17.169 "pdu_pool_size": 36864, 00:09:17.169 "immediate_data_pool_size": 16384, 00:09:17.169 "data_out_pool_size": 2048 00:09:17.169 } 00:09:17.169 } 00:09:17.169 ] 00:09:17.169 } 00:09:17.169 ] 00:09:17.169 } 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58272 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58272 ']' 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58272 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58272 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:17.169 killing process with pid 58272 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58272' 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58272 00:09:17.169 13:35:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58272 00:09:20.452 13:35:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58339 00:09:20.452 13:35:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:20.452 13:35:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58339 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58339 ']' 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58339 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58339 00:09:25.825 killing process with pid 58339 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58339' 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58339 00:09:25.825 13:35:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58339 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:28.358 ************************************ 00:09:28.358 END TEST skip_rpc_with_json 00:09:28.358 ************************************ 00:09:28.358 00:09:28.358 real 0m12.647s 00:09:28.358 user 0m11.976s 00:09:28.358 sys 0m0.998s 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:28.358 13:35:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:28.358 13:35:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:28.358 13:35:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.358 13:35:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.358 ************************************ 00:09:28.358 START TEST skip_rpc_with_delay 00:09:28.358 ************************************ 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:28.358 [2024-11-06 13:35:22.240425] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
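The *ERROR* above is the pass condition for skip_rpc_with_delay: --wait-for-rpc holds subsystem initialization until an RPC tells it to proceed, which is contradictory once --no-rpc-server removes the listener, so spdk_app_start refuses the combination. For contrast, the normal --wait-for-rpc flow is roughly (a sketch, default socket assumed):

  $ build/bin/spdk_tgt --wait-for-rpc &
  $ scripts/rpc.py spdk_get_version       # startup-state RPCs are already reachable here
  $ scripts/rpc.py framework_start_init   # release the target into full subsystem initialization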
00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:28.358 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:28.358 00:09:28.358 real 0m0.215s 00:09:28.359 user 0m0.118s 00:09:28.359 sys 0m0.095s 00:09:28.359 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.359 ************************************ 00:09:28.359 END TEST skip_rpc_with_delay 00:09:28.359 ************************************ 00:09:28.359 13:35:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:28.616 13:35:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:28.616 13:35:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:28.616 13:35:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:28.616 13:35:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:28.616 13:35:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.616 13:35:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.617 ************************************ 00:09:28.617 START TEST exit_on_failed_rpc_init 00:09:28.617 ************************************ 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58478 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58478 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58478 ']' 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.617 13:35:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:28.617 [2024-11-06 13:35:22.518546] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:09:28.617 [2024-11-06 13:35:22.518738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58478 ] 00:09:28.875 [2024-11-06 13:35:22.712349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.133 [2024-11-06 13:35:22.865619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:30.067 13:35:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:30.326 [2024-11-06 13:35:24.122090] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:09:30.326 [2024-11-06 13:35:24.122589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58496 ] 00:09:30.584 [2024-11-06 13:35:24.314851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.584 [2024-11-06 13:35:24.454919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.584 [2024-11-06 13:35:24.455268] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
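This rpc_listen failure, together with the spdk_rpc_initialize error and the spdk_app_stop warning on the lines that follow, is exactly what exit_on_failed_rpc_init provokes: the second spdk_tgt (pid 58496) cannot bind /var/tmp/spdk.sock while the first (pid 58478) still holds it. Outside the test, a second target would be given its own socket, e.g. (a sketch):

  $ build/bin/spdk_tgt -m 0x1 &                             # first instance, default /var/tmp/spdk.sock
  $ build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &      # second instance, private RPC socket
  $ scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version  # address the second instance explicitly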
00:09:30.584 [2024-11-06 13:35:24.455297] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:30.584 [2024-11-06 13:35:24.455321] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58478 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58478 ']' 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58478 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58478 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:30.842 killing process with pid 58478 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58478' 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58478 00:09:30.842 13:35:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58478 00:09:34.156 00:09:34.156 real 0m5.191s 00:09:34.156 user 0m5.566s 00:09:34.156 sys 0m0.741s 00:09:34.156 13:35:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.156 ************************************ 00:09:34.156 END TEST exit_on_failed_rpc_init 00:09:34.156 ************************************ 00:09:34.156 13:35:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:34.156 13:35:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:34.156 ************************************ 00:09:34.156 END TEST skip_rpc 00:09:34.156 ************************************ 00:09:34.156 00:09:34.156 real 0m26.407s 00:09:34.156 user 0m25.228s 00:09:34.156 sys 0m2.521s 00:09:34.156 13:35:27 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.156 13:35:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.156 13:35:27 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:34.156 13:35:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:34.156 13:35:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.157 13:35:27 -- common/autotest_common.sh@10 -- # set +x 00:09:34.157 
************************************ 00:09:34.157 START TEST rpc_client 00:09:34.157 ************************************ 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:34.157 * Looking for test storage... 00:09:34.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.157 13:35:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:34.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.157 --rc genhtml_branch_coverage=1 00:09:34.157 --rc genhtml_function_coverage=1 00:09:34.157 --rc genhtml_legend=1 00:09:34.157 --rc geninfo_all_blocks=1 00:09:34.157 --rc geninfo_unexecuted_blocks=1 00:09:34.157 00:09:34.157 ' 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:34.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.157 --rc genhtml_branch_coverage=1 00:09:34.157 --rc genhtml_function_coverage=1 00:09:34.157 --rc genhtml_legend=1 00:09:34.157 --rc geninfo_all_blocks=1 00:09:34.157 --rc geninfo_unexecuted_blocks=1 00:09:34.157 00:09:34.157 ' 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:34.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.157 --rc genhtml_branch_coverage=1 00:09:34.157 --rc genhtml_function_coverage=1 00:09:34.157 --rc genhtml_legend=1 00:09:34.157 --rc geninfo_all_blocks=1 00:09:34.157 --rc geninfo_unexecuted_blocks=1 00:09:34.157 00:09:34.157 ' 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:34.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.157 --rc genhtml_branch_coverage=1 00:09:34.157 --rc genhtml_function_coverage=1 00:09:34.157 --rc genhtml_legend=1 00:09:34.157 --rc geninfo_all_blocks=1 00:09:34.157 --rc geninfo_unexecuted_blocks=1 00:09:34.157 00:09:34.157 ' 00:09:34.157 13:35:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:34.157 OK 00:09:34.157 13:35:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:34.157 00:09:34.157 real 0m0.278s 00:09:34.157 user 0m0.137s 00:09:34.157 sys 0m0.153s 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.157 13:35:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:34.157 ************************************ 00:09:34.157 END TEST rpc_client 00:09:34.157 ************************************ 00:09:34.157 13:35:27 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:34.157 13:35:27 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:34.157 13:35:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.157 13:35:27 -- common/autotest_common.sh@10 -- # set +x 00:09:34.157 ************************************ 00:09:34.157 START TEST json_config 00:09:34.157 ************************************ 00:09:34.157 13:35:28 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:34.157 13:35:28 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:34.157 13:35:28 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:34.157 13:35:28 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:09:34.418 13:35:28 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:34.418 13:35:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.418 13:35:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.418 13:35:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.418 13:35:28 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.418 13:35:28 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.418 13:35:28 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.418 13:35:28 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.418 13:35:28 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.418 13:35:28 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.418 13:35:28 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.418 13:35:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.418 13:35:28 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:34.418 13:35:28 json_config -- scripts/common.sh@345 -- # : 1 00:09:34.418 13:35:28 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.418 13:35:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.418 13:35:28 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:34.418 13:35:28 json_config -- scripts/common.sh@353 -- # local d=1 00:09:34.418 13:35:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.418 13:35:28 json_config -- scripts/common.sh@355 -- # echo 1 00:09:34.418 13:35:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.418 13:35:28 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:34.418 13:35:28 json_config -- scripts/common.sh@353 -- # local d=2 00:09:34.418 13:35:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.418 13:35:28 json_config -- scripts/common.sh@355 -- # echo 2 00:09:34.418 13:35:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.418 13:35:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.418 13:35:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.418 13:35:28 json_config -- scripts/common.sh@368 -- # return 0 00:09:34.418 13:35:28 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.418 13:35:28 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:34.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.418 --rc genhtml_branch_coverage=1 00:09:34.418 --rc genhtml_function_coverage=1 00:09:34.418 --rc genhtml_legend=1 00:09:34.418 --rc geninfo_all_blocks=1 00:09:34.418 --rc geninfo_unexecuted_blocks=1 00:09:34.418 00:09:34.418 ' 00:09:34.418 13:35:28 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:34.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.418 --rc genhtml_branch_coverage=1 00:09:34.418 --rc genhtml_function_coverage=1 00:09:34.418 --rc genhtml_legend=1 00:09:34.418 --rc geninfo_all_blocks=1 00:09:34.418 --rc geninfo_unexecuted_blocks=1 00:09:34.418 00:09:34.418 ' 00:09:34.418 13:35:28 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:34.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.418 --rc genhtml_branch_coverage=1 00:09:34.418 --rc genhtml_function_coverage=1 00:09:34.418 --rc genhtml_legend=1 00:09:34.418 --rc geninfo_all_blocks=1 00:09:34.418 --rc geninfo_unexecuted_blocks=1 00:09:34.418 00:09:34.418 ' 00:09:34.418 13:35:28 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:34.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.418 --rc genhtml_branch_coverage=1 00:09:34.418 --rc genhtml_function_coverage=1 00:09:34.418 --rc genhtml_legend=1 00:09:34.418 --rc geninfo_all_blocks=1 00:09:34.418 --rc geninfo_unexecuted_blocks=1 00:09:34.418 00:09:34.418 ' 00:09:34.418 13:35:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.418 13:35:28 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5bc0c953-5082-4147-bb80-66cd1b39e61f 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5bc0c953-5082-4147-bb80-66cd1b39e61f 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.418 13:35:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.418 13:35:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.418 13:35:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.418 13:35:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.418 13:35:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.418 13:35:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.418 13:35:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.418 13:35:28 json_config -- paths/export.sh@5 -- # export PATH 00:09:34.418 13:35:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@51 -- # : 0 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.418 13:35:28 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.418 13:35:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.418 13:35:28 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:34.418 13:35:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:34.418 13:35:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:34.418 13:35:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:34.418 13:35:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:34.418 13:35:28 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:34.418 WARNING: No tests are enabled so not running JSON configuration tests 00:09:34.418 13:35:28 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:34.418 ************************************ 00:09:34.418 END TEST json_config 00:09:34.418 ************************************ 00:09:34.418 00:09:34.418 real 0m0.204s 00:09:34.418 user 0m0.128s 00:09:34.418 sys 0m0.076s 00:09:34.418 13:35:28 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.418 13:35:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:34.418 13:35:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:34.419 13:35:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:34.419 13:35:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.419 13:35:28 -- common/autotest_common.sh@10 -- # set +x 00:09:34.419 ************************************ 00:09:34.419 START TEST json_config_extra_key 00:09:34.419 ************************************ 00:09:34.419 13:35:28 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:34.419 13:35:28 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:34.419 13:35:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:09:34.419 13:35:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:34.679 13:35:28 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.679 13:35:28 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:34.679 13:35:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.679 13:35:28 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:34.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.679 --rc genhtml_branch_coverage=1 00:09:34.679 --rc genhtml_function_coverage=1 00:09:34.679 --rc genhtml_legend=1 00:09:34.679 --rc geninfo_all_blocks=1 00:09:34.679 --rc geninfo_unexecuted_blocks=1 00:09:34.679 00:09:34.679 ' 00:09:34.679 13:35:28 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:34.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.679 --rc genhtml_branch_coverage=1 00:09:34.679 --rc genhtml_function_coverage=1 00:09:34.679 --rc genhtml_legend=1 00:09:34.679 --rc geninfo_all_blocks=1 00:09:34.679 --rc geninfo_unexecuted_blocks=1 00:09:34.679 00:09:34.679 ' 00:09:34.679 13:35:28 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:34.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.679 --rc genhtml_branch_coverage=1 00:09:34.679 --rc genhtml_function_coverage=1 00:09:34.679 --rc genhtml_legend=1 00:09:34.679 --rc geninfo_all_blocks=1 00:09:34.679 --rc geninfo_unexecuted_blocks=1 00:09:34.679 00:09:34.679 ' 00:09:34.679 13:35:28 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:34.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.679 --rc genhtml_branch_coverage=1 00:09:34.679 --rc 
genhtml_function_coverage=1 00:09:34.679 --rc genhtml_legend=1 00:09:34.679 --rc geninfo_all_blocks=1 00:09:34.679 --rc geninfo_unexecuted_blocks=1 00:09:34.679 00:09:34.679 ' 00:09:34.679 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5bc0c953-5082-4147-bb80-66cd1b39e61f 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5bc0c953-5082-4147-bb80-66cd1b39e61f 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.679 13:35:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.679 13:35:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.679 13:35:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.679 13:35:28 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.679 13:35:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:34.679 13:35:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.679 13:35:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.679 INFO: launching applications... 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
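The "[: : integer expression expected" complaint above (seen under both json_config and json_config_extra_key) comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']': a variable that is unset in this environment expands to the empty string, which test's -eq cannot parse. A hedged sketch of the usual guard follows; SOME_FLAG is a stand-in name, since the trace does not reveal which variable is empty:

    # SOME_FLAG is a stand-in; the trace only shows that the variable expands empty
    SOME_FLAG=${SOME_FLAG:-0}   # default unset/empty to 0 before an integer test
    if [ "$SOME_FLAG" -eq 1 ]; then
        echo "flag enabled"
    fi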
00:09:34.680 13:35:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58717 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:34.680 Waiting for target to run... 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58717 /var/tmp/spdk_tgt.sock 00:09:34.680 13:35:28 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58717 ']' 00:09:34.680 13:35:28 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:34.680 13:35:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:34.680 13:35:28 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:34.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:34.680 13:35:28 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:34.680 13:35:28 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:34.680 13:35:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:34.680 [2024-11-06 13:35:28.587308] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:09:34.680 [2024-11-06 13:35:28.587837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58717 ] 00:09:35.247 [2024-11-06 13:35:28.994152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.247 [2024-11-06 13:35:29.155197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.188 13:35:30 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:36.188 13:35:30 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:36.188 00:09:36.188 INFO: shutting down applications... 00:09:36.188 13:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
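The target just started is the standard json_config lifecycle: spdk_tgt is launched with -m 0x1 -s 1024, an RPC socket at /var/tmp/spdk_tgt.sock and the extra_key.json config, its pid (58717) is recorded in app_pid, and waitforlisten blocks until the socket answers. The shutdown traced next sends SIGINT and polls with kill -0 every 0.5 s for up to 30 tries. A minimal sketch of that start/stop pattern, with illustrative paths:

    # minimal sketch of the traced start/stop pattern; paths are illustrative
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json &
    pid=$!
    # ... drive the target over the RPC socket ...
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # process gone? stop polling
        sleep 0.5
    done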
00:09:36.188 13:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58717 ]] 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58717 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:09:36.188 13:35:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:36.754 13:35:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:36.754 13:35:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:36.754 13:35:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:09:36.754 13:35:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:37.349 13:35:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:37.349 13:35:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:37.349 13:35:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:09:37.349 13:35:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:37.607 13:35:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:37.607 13:35:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:37.607 13:35:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:09:37.607 13:35:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:38.174 13:35:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:38.174 13:35:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:38.174 13:35:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:09:38.174 13:35:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:38.740 13:35:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:38.740 13:35:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:38.740 13:35:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:09:38.740 13:35:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:39.305 13:35:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:39.305 13:35:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:39.305 13:35:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:09:39.305 13:35:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:39.872 13:35:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:39.872 13:35:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:39.872 13:35:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:09:39.872 13:35:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:39.872 13:35:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:39.872 13:35:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:39.872 SPDK target shutdown 
done 00:09:39.872 13:35:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:39.872 Success 00:09:39.872 13:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:39.872 00:09:39.872 real 0m5.280s 00:09:39.872 user 0m4.785s 00:09:39.872 sys 0m0.615s 00:09:39.872 ************************************ 00:09:39.872 END TEST json_config_extra_key 00:09:39.872 ************************************ 00:09:39.872 13:35:33 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:39.872 13:35:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:39.872 13:35:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:39.872 13:35:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:39.872 13:35:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:39.872 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:09:39.872 ************************************ 00:09:39.872 START TEST alias_rpc 00:09:39.872 ************************************ 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:39.872 * Looking for test storage... 00:09:39.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:39.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.872 13:35:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:39.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.872 --rc genhtml_branch_coverage=1 00:09:39.872 --rc genhtml_function_coverage=1 00:09:39.872 --rc genhtml_legend=1 00:09:39.872 --rc geninfo_all_blocks=1 00:09:39.872 --rc geninfo_unexecuted_blocks=1 00:09:39.872 00:09:39.872 ' 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:39.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.872 --rc genhtml_branch_coverage=1 00:09:39.872 --rc genhtml_function_coverage=1 00:09:39.872 --rc genhtml_legend=1 00:09:39.872 --rc geninfo_all_blocks=1 00:09:39.872 --rc geninfo_unexecuted_blocks=1 00:09:39.872 00:09:39.872 ' 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:39.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.872 --rc genhtml_branch_coverage=1 00:09:39.872 --rc genhtml_function_coverage=1 00:09:39.872 --rc genhtml_legend=1 00:09:39.872 --rc geninfo_all_blocks=1 00:09:39.872 --rc geninfo_unexecuted_blocks=1 00:09:39.872 00:09:39.872 ' 00:09:39.872 13:35:33 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:39.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.872 --rc genhtml_branch_coverage=1 00:09:39.872 --rc genhtml_function_coverage=1 00:09:39.873 --rc genhtml_legend=1 00:09:39.873 --rc geninfo_all_blocks=1 00:09:39.873 --rc geninfo_unexecuted_blocks=1 00:09:39.873 00:09:39.873 ' 00:09:39.873 13:35:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:39.873 13:35:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58835 00:09:39.873 13:35:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58835 00:09:39.873 13:35:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:39.873 13:35:33 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58835 ']' 00:09:39.873 13:35:33 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.873 13:35:33 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:39.873 13:35:33 alias_rpc -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.873 13:35:33 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:39.873 13:35:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.131 [2024-11-06 13:35:33.921433] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:09:40.131 [2024-11-06 13:35:33.922039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58835 ] 00:09:40.131 [2024-11-06 13:35:34.108810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.389 [2024-11-06 13:35:34.254423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.323 13:35:35 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:41.323 13:35:35 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:41.323 13:35:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:41.888 13:35:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58835 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58835 ']' 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58835 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58835 00:09:41.888 killing process with pid 58835 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58835' 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@971 -- # kill 58835 00:09:41.888 13:35:35 alias_rpc -- common/autotest_common.sh@976 -- # wait 58835 00:09:45.167 ************************************ 00:09:45.167 END TEST alias_rpc 00:09:45.167 ************************************ 00:09:45.167 00:09:45.167 real 0m4.944s 00:09:45.167 user 0m5.204s 00:09:45.167 sys 0m0.641s 00:09:45.167 13:35:38 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:45.167 13:35:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.167 13:35:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:45.167 13:35:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:45.167 13:35:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:45.167 13:35:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:45.167 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:09:45.167 ************************************ 00:09:45.167 START TEST spdkcli_tcp 00:09:45.167 ************************************ 00:09:45.167 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:45.167 * Looking for test storage... 
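The alias_rpc test above is short: replay the saved aliases with rpc.py load_config -i, then tear the target down through killprocess. Per the trace, killprocess first confirms pid 58835 is alive (kill -0), reads its command name with ps --no-headers -o comm= (SPDK renames it to reactor_0), refuses to signal anything named sudo, and only then kills and waits. A condensed sketch of that check-then-kill flow, not the verbatim helper:

    # condensed sketch of the killprocess flow in the trace above
    killproc() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1     # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1             # never signal a sudo wrapper
        kill "$pid" && wait "$pid"                 # wait works because pid is a child
    }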
00:09:45.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:45.167 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:45.167 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:45.167 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:45.167 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.168 13:35:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:45.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.168 --rc genhtml_branch_coverage=1 00:09:45.168 --rc genhtml_function_coverage=1 00:09:45.168 --rc genhtml_legend=1 00:09:45.168 --rc geninfo_all_blocks=1 00:09:45.168 --rc geninfo_unexecuted_blocks=1 00:09:45.168 00:09:45.168 ' 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:45.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.168 --rc genhtml_branch_coverage=1 00:09:45.168 --rc genhtml_function_coverage=1 00:09:45.168 --rc genhtml_legend=1 00:09:45.168 --rc geninfo_all_blocks=1 00:09:45.168 --rc geninfo_unexecuted_blocks=1 00:09:45.168 
00:09:45.168 ' 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:45.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.168 --rc genhtml_branch_coverage=1 00:09:45.168 --rc genhtml_function_coverage=1 00:09:45.168 --rc genhtml_legend=1 00:09:45.168 --rc geninfo_all_blocks=1 00:09:45.168 --rc geninfo_unexecuted_blocks=1 00:09:45.168 00:09:45.168 ' 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:45.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.168 --rc genhtml_branch_coverage=1 00:09:45.168 --rc genhtml_function_coverage=1 00:09:45.168 --rc genhtml_legend=1 00:09:45.168 --rc geninfo_all_blocks=1 00:09:45.168 --rc geninfo_unexecuted_blocks=1 00:09:45.168 00:09:45.168 ' 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58953 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:45.168 13:35:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58953 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58953 ']' 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:45.168 13:35:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.168 [2024-11-06 13:35:38.957783] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
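spdkcli_tcp runs the target on two cores (-m 0x3, hence the two reactor lines below), and its whole point is the transport: the UNIX-domain RPC socket at /var/tmp/spdk.sock is bridged onto TCP port 9998 with socat so rpc.py can connect to 127.0.0.1:9998. A sketch of that bridge; the reuseaddr and fork options are conveniences not present in the traced command:

    # bridge the UNIX RPC socket onto TCP, as the trace below does;
    # reuseaddr,fork are added for reuse and are not in the traced command
    socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"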
00:09:45.168 [2024-11-06 13:35:38.958204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:09:45.426 [2024-11-06 13:35:39.165259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.426 [2024-11-06 13:35:39.335590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.426 [2024-11-06 13:35:39.335619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.803 13:35:40 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:46.803 13:35:40 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:09:46.803 13:35:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58970 00:09:46.803 13:35:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:46.803 13:35:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:46.803 [ 00:09:46.803 "bdev_malloc_delete", 00:09:46.803 "bdev_malloc_create", 00:09:46.803 "bdev_null_resize", 00:09:46.803 "bdev_null_delete", 00:09:46.803 "bdev_null_create", 00:09:46.803 "bdev_nvme_cuse_unregister", 00:09:46.803 "bdev_nvme_cuse_register", 00:09:46.803 "bdev_opal_new_user", 00:09:46.803 "bdev_opal_set_lock_state", 00:09:46.803 "bdev_opal_delete", 00:09:46.803 "bdev_opal_get_info", 00:09:46.803 "bdev_opal_create", 00:09:46.803 "bdev_nvme_opal_revert", 00:09:46.803 "bdev_nvme_opal_init", 00:09:46.803 "bdev_nvme_send_cmd", 00:09:46.803 "bdev_nvme_set_keys", 00:09:46.803 "bdev_nvme_get_path_iostat", 00:09:46.803 "bdev_nvme_get_mdns_discovery_info", 00:09:46.803 "bdev_nvme_stop_mdns_discovery", 00:09:46.803 "bdev_nvme_start_mdns_discovery", 00:09:46.803 "bdev_nvme_set_multipath_policy", 00:09:46.804 "bdev_nvme_set_preferred_path", 00:09:46.804 "bdev_nvme_get_io_paths", 00:09:46.804 "bdev_nvme_remove_error_injection", 00:09:46.804 "bdev_nvme_add_error_injection", 00:09:46.804 "bdev_nvme_get_discovery_info", 00:09:46.804 "bdev_nvme_stop_discovery", 00:09:46.804 "bdev_nvme_start_discovery", 00:09:46.804 "bdev_nvme_get_controller_health_info", 00:09:46.804 "bdev_nvme_disable_controller", 00:09:46.804 "bdev_nvme_enable_controller", 00:09:46.804 "bdev_nvme_reset_controller", 00:09:46.804 "bdev_nvme_get_transport_statistics", 00:09:46.804 "bdev_nvme_apply_firmware", 00:09:46.804 "bdev_nvme_detach_controller", 00:09:46.804 "bdev_nvme_get_controllers", 00:09:46.804 "bdev_nvme_attach_controller", 00:09:46.804 "bdev_nvme_set_hotplug", 00:09:46.804 "bdev_nvme_set_options", 00:09:46.804 "bdev_passthru_delete", 00:09:46.804 "bdev_passthru_create", 00:09:46.804 "bdev_lvol_set_parent_bdev", 00:09:46.804 "bdev_lvol_set_parent", 00:09:46.804 "bdev_lvol_check_shallow_copy", 00:09:46.804 "bdev_lvol_start_shallow_copy", 00:09:46.804 "bdev_lvol_grow_lvstore", 00:09:46.804 "bdev_lvol_get_lvols", 00:09:46.804 "bdev_lvol_get_lvstores", 00:09:46.804 "bdev_lvol_delete", 00:09:46.804 "bdev_lvol_set_read_only", 00:09:46.804 "bdev_lvol_resize", 00:09:46.804 "bdev_lvol_decouple_parent", 00:09:46.804 "bdev_lvol_inflate", 00:09:46.804 "bdev_lvol_rename", 00:09:46.804 "bdev_lvol_clone_bdev", 00:09:46.804 "bdev_lvol_clone", 00:09:46.804 "bdev_lvol_snapshot", 00:09:46.804 "bdev_lvol_create", 00:09:46.804 "bdev_lvol_delete_lvstore", 00:09:46.804 "bdev_lvol_rename_lvstore", 00:09:46.804 
"bdev_lvol_create_lvstore", 00:09:46.804 "bdev_raid_set_options", 00:09:46.804 "bdev_raid_remove_base_bdev", 00:09:46.804 "bdev_raid_add_base_bdev", 00:09:46.804 "bdev_raid_delete", 00:09:46.804 "bdev_raid_create", 00:09:46.804 "bdev_raid_get_bdevs", 00:09:46.804 "bdev_error_inject_error", 00:09:46.804 "bdev_error_delete", 00:09:46.804 "bdev_error_create", 00:09:46.804 "bdev_split_delete", 00:09:46.804 "bdev_split_create", 00:09:46.804 "bdev_delay_delete", 00:09:46.804 "bdev_delay_create", 00:09:46.804 "bdev_delay_update_latency", 00:09:46.804 "bdev_zone_block_delete", 00:09:46.804 "bdev_zone_block_create", 00:09:46.804 "blobfs_create", 00:09:46.804 "blobfs_detect", 00:09:46.804 "blobfs_set_cache_size", 00:09:46.804 "bdev_xnvme_delete", 00:09:46.804 "bdev_xnvme_create", 00:09:46.804 "bdev_aio_delete", 00:09:46.804 "bdev_aio_rescan", 00:09:46.804 "bdev_aio_create", 00:09:46.804 "bdev_ftl_set_property", 00:09:46.804 "bdev_ftl_get_properties", 00:09:46.804 "bdev_ftl_get_stats", 00:09:46.804 "bdev_ftl_unmap", 00:09:46.804 "bdev_ftl_unload", 00:09:46.804 "bdev_ftl_delete", 00:09:46.804 "bdev_ftl_load", 00:09:46.804 "bdev_ftl_create", 00:09:46.804 "bdev_virtio_attach_controller", 00:09:46.804 "bdev_virtio_scsi_get_devices", 00:09:46.804 "bdev_virtio_detach_controller", 00:09:46.804 "bdev_virtio_blk_set_hotplug", 00:09:46.804 "bdev_iscsi_delete", 00:09:46.804 "bdev_iscsi_create", 00:09:46.804 "bdev_iscsi_set_options", 00:09:46.804 "accel_error_inject_error", 00:09:46.804 "ioat_scan_accel_module", 00:09:46.804 "dsa_scan_accel_module", 00:09:46.804 "iaa_scan_accel_module", 00:09:46.804 "keyring_file_remove_key", 00:09:46.804 "keyring_file_add_key", 00:09:46.804 "keyring_linux_set_options", 00:09:46.804 "fsdev_aio_delete", 00:09:46.804 "fsdev_aio_create", 00:09:46.804 "iscsi_get_histogram", 00:09:46.804 "iscsi_enable_histogram", 00:09:46.804 "iscsi_set_options", 00:09:46.804 "iscsi_get_auth_groups", 00:09:46.804 "iscsi_auth_group_remove_secret", 00:09:46.804 "iscsi_auth_group_add_secret", 00:09:46.804 "iscsi_delete_auth_group", 00:09:46.804 "iscsi_create_auth_group", 00:09:46.804 "iscsi_set_discovery_auth", 00:09:46.804 "iscsi_get_options", 00:09:46.804 "iscsi_target_node_request_logout", 00:09:46.804 "iscsi_target_node_set_redirect", 00:09:46.804 "iscsi_target_node_set_auth", 00:09:46.804 "iscsi_target_node_add_lun", 00:09:46.804 "iscsi_get_stats", 00:09:46.804 "iscsi_get_connections", 00:09:46.804 "iscsi_portal_group_set_auth", 00:09:46.804 "iscsi_start_portal_group", 00:09:46.804 "iscsi_delete_portal_group", 00:09:46.804 "iscsi_create_portal_group", 00:09:46.804 "iscsi_get_portal_groups", 00:09:46.804 "iscsi_delete_target_node", 00:09:46.804 "iscsi_target_node_remove_pg_ig_maps", 00:09:46.804 "iscsi_target_node_add_pg_ig_maps", 00:09:46.804 "iscsi_create_target_node", 00:09:46.804 "iscsi_get_target_nodes", 00:09:46.804 "iscsi_delete_initiator_group", 00:09:46.804 "iscsi_initiator_group_remove_initiators", 00:09:46.804 "iscsi_initiator_group_add_initiators", 00:09:46.804 "iscsi_create_initiator_group", 00:09:46.804 "iscsi_get_initiator_groups", 00:09:46.804 "nvmf_set_crdt", 00:09:46.804 "nvmf_set_config", 00:09:46.804 "nvmf_set_max_subsystems", 00:09:46.804 "nvmf_stop_mdns_prr", 00:09:46.804 "nvmf_publish_mdns_prr", 00:09:46.804 "nvmf_subsystem_get_listeners", 00:09:46.804 "nvmf_subsystem_get_qpairs", 00:09:46.804 "nvmf_subsystem_get_controllers", 00:09:46.804 "nvmf_get_stats", 00:09:46.804 "nvmf_get_transports", 00:09:46.804 "nvmf_create_transport", 00:09:46.804 "nvmf_get_targets", 00:09:46.804 
"nvmf_delete_target", 00:09:46.804 "nvmf_create_target", 00:09:46.804 "nvmf_subsystem_allow_any_host", 00:09:46.804 "nvmf_subsystem_set_keys", 00:09:46.804 "nvmf_subsystem_remove_host", 00:09:46.804 "nvmf_subsystem_add_host", 00:09:46.804 "nvmf_ns_remove_host", 00:09:46.804 "nvmf_ns_add_host", 00:09:46.804 "nvmf_subsystem_remove_ns", 00:09:46.804 "nvmf_subsystem_set_ns_ana_group", 00:09:46.804 "nvmf_subsystem_add_ns", 00:09:46.804 "nvmf_subsystem_listener_set_ana_state", 00:09:46.804 "nvmf_discovery_get_referrals", 00:09:46.804 "nvmf_discovery_remove_referral", 00:09:46.804 "nvmf_discovery_add_referral", 00:09:46.804 "nvmf_subsystem_remove_listener", 00:09:46.804 "nvmf_subsystem_add_listener", 00:09:46.804 "nvmf_delete_subsystem", 00:09:46.804 "nvmf_create_subsystem", 00:09:46.804 "nvmf_get_subsystems", 00:09:46.804 "env_dpdk_get_mem_stats", 00:09:46.804 "nbd_get_disks", 00:09:46.804 "nbd_stop_disk", 00:09:46.804 "nbd_start_disk", 00:09:46.804 "ublk_recover_disk", 00:09:46.804 "ublk_get_disks", 00:09:46.804 "ublk_stop_disk", 00:09:46.804 "ublk_start_disk", 00:09:46.804 "ublk_destroy_target", 00:09:46.805 "ublk_create_target", 00:09:46.805 "virtio_blk_create_transport", 00:09:46.805 "virtio_blk_get_transports", 00:09:46.805 "vhost_controller_set_coalescing", 00:09:46.805 "vhost_get_controllers", 00:09:46.805 "vhost_delete_controller", 00:09:46.805 "vhost_create_blk_controller", 00:09:46.805 "vhost_scsi_controller_remove_target", 00:09:46.805 "vhost_scsi_controller_add_target", 00:09:46.805 "vhost_start_scsi_controller", 00:09:46.805 "vhost_create_scsi_controller", 00:09:46.805 "thread_set_cpumask", 00:09:46.805 "scheduler_set_options", 00:09:46.805 "framework_get_governor", 00:09:46.805 "framework_get_scheduler", 00:09:46.805 "framework_set_scheduler", 00:09:46.805 "framework_get_reactors", 00:09:46.805 "thread_get_io_channels", 00:09:46.805 "thread_get_pollers", 00:09:46.805 "thread_get_stats", 00:09:46.805 "framework_monitor_context_switch", 00:09:46.805 "spdk_kill_instance", 00:09:46.805 "log_enable_timestamps", 00:09:46.805 "log_get_flags", 00:09:46.805 "log_clear_flag", 00:09:46.805 "log_set_flag", 00:09:46.805 "log_get_level", 00:09:46.805 "log_set_level", 00:09:46.805 "log_get_print_level", 00:09:46.805 "log_set_print_level", 00:09:46.805 "framework_enable_cpumask_locks", 00:09:46.805 "framework_disable_cpumask_locks", 00:09:46.805 "framework_wait_init", 00:09:46.805 "framework_start_init", 00:09:46.805 "scsi_get_devices", 00:09:46.805 "bdev_get_histogram", 00:09:46.805 "bdev_enable_histogram", 00:09:46.805 "bdev_set_qos_limit", 00:09:46.805 "bdev_set_qd_sampling_period", 00:09:46.805 "bdev_get_bdevs", 00:09:46.805 "bdev_reset_iostat", 00:09:46.805 "bdev_get_iostat", 00:09:46.805 "bdev_examine", 00:09:46.805 "bdev_wait_for_examine", 00:09:46.805 "bdev_set_options", 00:09:46.805 "accel_get_stats", 00:09:46.805 "accel_set_options", 00:09:46.805 "accel_set_driver", 00:09:46.805 "accel_crypto_key_destroy", 00:09:46.805 "accel_crypto_keys_get", 00:09:46.805 "accel_crypto_key_create", 00:09:46.805 "accel_assign_opc", 00:09:46.805 "accel_get_module_info", 00:09:46.805 "accel_get_opc_assignments", 00:09:46.805 "vmd_rescan", 00:09:46.805 "vmd_remove_device", 00:09:46.805 "vmd_enable", 00:09:46.805 "sock_get_default_impl", 00:09:46.805 "sock_set_default_impl", 00:09:46.805 "sock_impl_set_options", 00:09:46.805 "sock_impl_get_options", 00:09:46.805 "iobuf_get_stats", 00:09:46.805 "iobuf_set_options", 00:09:46.805 "keyring_get_keys", 00:09:46.805 "framework_get_pci_devices", 00:09:46.805 
"framework_get_config", 00:09:46.805 "framework_get_subsystems", 00:09:46.805 "fsdev_set_opts", 00:09:46.805 "fsdev_get_opts", 00:09:46.805 "trace_get_info", 00:09:46.805 "trace_get_tpoint_group_mask", 00:09:46.805 "trace_disable_tpoint_group", 00:09:46.805 "trace_enable_tpoint_group", 00:09:46.805 "trace_clear_tpoint_mask", 00:09:46.805 "trace_set_tpoint_mask", 00:09:46.805 "notify_get_notifications", 00:09:46.805 "notify_get_types", 00:09:46.805 "spdk_get_version", 00:09:46.805 "rpc_get_methods" 00:09:46.805 ] 00:09:46.805 13:35:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:46.805 13:35:40 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.805 13:35:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:46.805 13:35:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:46.805 13:35:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58953 00:09:46.805 13:35:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58953 ']' 00:09:46.805 13:35:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58953 00:09:46.805 13:35:40 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:09:46.805 13:35:40 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:46.805 13:35:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58953 00:09:47.064 killing process with pid 58953 00:09:47.064 13:35:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:47.064 13:35:40 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:47.064 13:35:40 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58953' 00:09:47.064 13:35:40 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58953 00:09:47.064 13:35:40 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58953 00:09:50.353 ************************************ 00:09:50.353 END TEST spdkcli_tcp 00:09:50.353 ************************************ 00:09:50.353 00:09:50.353 real 0m5.032s 00:09:50.353 user 0m9.219s 00:09:50.353 sys 0m0.709s 00:09:50.353 13:35:43 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:50.353 13:35:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:50.353 13:35:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:50.353 13:35:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:50.353 13:35:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:50.353 13:35:43 -- common/autotest_common.sh@10 -- # set +x 00:09:50.353 ************************************ 00:09:50.353 START TEST dpdk_mem_utility 00:09:50.353 ************************************ 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:50.354 * Looking for test storage... 
00:09:50.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.354 13:35:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:50.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.354 --rc genhtml_branch_coverage=1 00:09:50.354 --rc genhtml_function_coverage=1 00:09:50.354 --rc genhtml_legend=1 00:09:50.354 --rc geninfo_all_blocks=1 00:09:50.354 --rc geninfo_unexecuted_blocks=1 00:09:50.354 00:09:50.354 ' 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:50.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.354 --rc 
genhtml_branch_coverage=1 00:09:50.354 --rc genhtml_function_coverage=1 00:09:50.354 --rc genhtml_legend=1 00:09:50.354 --rc geninfo_all_blocks=1 00:09:50.354 --rc geninfo_unexecuted_blocks=1 00:09:50.354 00:09:50.354 ' 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:50.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.354 --rc genhtml_branch_coverage=1 00:09:50.354 --rc genhtml_function_coverage=1 00:09:50.354 --rc genhtml_legend=1 00:09:50.354 --rc geninfo_all_blocks=1 00:09:50.354 --rc geninfo_unexecuted_blocks=1 00:09:50.354 00:09:50.354 ' 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:50.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.354 --rc genhtml_branch_coverage=1 00:09:50.354 --rc genhtml_function_coverage=1 00:09:50.354 --rc genhtml_legend=1 00:09:50.354 --rc geninfo_all_blocks=1 00:09:50.354 --rc geninfo_unexecuted_blocks=1 00:09:50.354 00:09:50.354 ' 00:09:50.354 13:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:50.354 13:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59085 00:09:50.354 13:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:50.354 13:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59085 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 59085 ']' 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:50.354 13:35:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:50.354 [2024-11-06 13:35:44.062410] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
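The dpdk_mem_utility test starting here exercises a dump-then-parse flow: rpc_cmd env_dpdk_get_mem_stats makes the target write its allocator state to /tmp/spdk_mem_dump.txt (the {"filename": ...} reply below), scripts/dpdk_mem_info.py summarizes the heap, mempools and memzones, and the -m 0 pass expands heap 0 element by element, producing the long listing that follows. Condensed, the commands as traced:

    # the dump-then-parse flow this test exercises (commands as traced)
    ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0            # per-element detail for heap id 0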
00:09:50.354 [2024-11-06 13:35:44.062813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59085 ] 00:09:50.354 [2024-11-06 13:35:44.264988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.616 [2024-11-06 13:35:44.433771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.551 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:51.551 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:09:51.551 13:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:51.551 13:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:51.551 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.551 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:51.551 { 00:09:51.551 "filename": "/tmp/spdk_mem_dump.txt" 00:09:51.551 } 00:09:51.551 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.551 13:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:51.811 DPDK memory size 824.000000 MiB in 1 heap(s) 00:09:51.811 1 heaps totaling size 824.000000 MiB 00:09:51.811 size: 824.000000 MiB heap id: 0 00:09:51.811 end heaps---------- 00:09:51.811 9 mempools totaling size 603.782043 MiB 00:09:51.811 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:51.811 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:51.811 size: 100.555481 MiB name: bdev_io_59085 00:09:51.811 size: 50.003479 MiB name: msgpool_59085 00:09:51.812 size: 36.509338 MiB name: fsdev_io_59085 00:09:51.812 size: 21.763794 MiB name: PDU_Pool 00:09:51.812 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:51.812 size: 4.133484 MiB name: evtpool_59085 00:09:51.812 size: 0.026123 MiB name: Session_Pool 00:09:51.812 end mempools------- 00:09:51.812 6 memzones totaling size 4.142822 MiB 00:09:51.812 size: 1.000366 MiB name: RG_ring_0_59085 00:09:51.812 size: 1.000366 MiB name: RG_ring_1_59085 00:09:51.812 size: 1.000366 MiB name: RG_ring_4_59085 00:09:51.812 size: 1.000366 MiB name: RG_ring_5_59085 00:09:51.812 size: 0.125366 MiB name: RG_ring_2_59085 00:09:51.812 size: 0.015991 MiB name: RG_ring_3_59085 00:09:51.812 end memzones------- 00:09:51.812 13:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:51.812 heap id: 0 total size: 824.000000 MiB number of busy elements: 317 number of free elements: 18 00:09:51.812 list of free elements. 
size: 16.780884 MiB 00:09:51.812 element at address: 0x200006400000 with size: 1.995972 MiB 00:09:51.812 element at address: 0x20000a600000 with size: 1.995972 MiB 00:09:51.812 element at address: 0x200003e00000 with size: 1.991028 MiB 00:09:51.812 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:51.812 element at address: 0x200019900040 with size: 0.999939 MiB 00:09:51.812 element at address: 0x200019a00000 with size: 0.999084 MiB 00:09:51.812 element at address: 0x200032600000 with size: 0.994324 MiB 00:09:51.812 element at address: 0x200000400000 with size: 0.992004 MiB 00:09:51.812 element at address: 0x200019200000 with size: 0.959656 MiB 00:09:51.812 element at address: 0x200019d00040 with size: 0.936401 MiB 00:09:51.812 element at address: 0x200000200000 with size: 0.716980 MiB 00:09:51.812 element at address: 0x20001b400000 with size: 0.562195 MiB 00:09:51.812 element at address: 0x200000c00000 with size: 0.489197 MiB 00:09:51.812 element at address: 0x200019600000 with size: 0.487976 MiB 00:09:51.812 element at address: 0x200019e00000 with size: 0.485413 MiB 00:09:51.812 element at address: 0x200012c00000 with size: 0.433472 MiB 00:09:51.812 element at address: 0x200028800000 with size: 0.390442 MiB 00:09:51.812 element at address: 0x200000800000 with size: 0.350891 MiB 00:09:51.812 list of standard malloc elements. size: 199.288208 MiB 00:09:51.812 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:09:51.812 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:09:51.812 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:51.812 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:51.812 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:09:51.812 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:51.812 element at address: 0x200019deff40 with size: 0.062683 MiB 00:09:51.812 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:51.812 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:09:51.812 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:09:51.812 element at address: 0x200012bff040 with size: 0.000305 MiB 00:09:51.812 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:09:51.812 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:09:51.812 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:09:51.812 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200000cff000 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff180 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff280 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff380 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff480 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff580 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff680 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff780 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff880 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bff980 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:09:51.812 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:09:51.813 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200019affc40 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b491bc0 with size: 0.000244 MiB 
00:09:51.813 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:09:51.813 element at 
address: 0x20001b494dc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200028863f40 with size: 0.000244 MiB 00:09:51.813 element at address: 0x200028864040 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886af80 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b080 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b180 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b280 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b380 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b480 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b580 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b680 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b780 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b880 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886b980 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:09:51.813 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886be80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c080 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c180 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c280 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c380 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c480 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c580 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c680 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c780 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c880 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886c980 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d080 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d180 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d280 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d380 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d480 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d580 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d680 
with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d780 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d880 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886d980 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886da80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886db80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886de80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886df80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e080 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e180 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e280 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e380 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e480 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e580 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e680 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e780 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e880 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886e980 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f080 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f180 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f280 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f380 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f480 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f580 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f680 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f780 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f880 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886f980 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:09:51.814 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:09:51.814 list of memzone associated elements. 
size: 607.930908 MiB 00:09:51.814 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:09:51.814 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:51.814 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:09:51.814 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:51.814 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:09:51.814 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59085_0 00:09:51.814 element at address: 0x200000dff340 with size: 48.003113 MiB 00:09:51.814 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59085_0 00:09:51.814 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:09:51.814 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59085_0 00:09:51.814 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:09:51.814 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:51.814 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:09:51.814 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:51.814 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:09:51.814 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59085_0 00:09:51.814 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:09:51.814 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59085 00:09:51.814 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:51.814 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59085 00:09:51.814 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:09:51.814 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:51.814 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:09:51.814 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:51.814 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:51.814 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:51.814 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:09:51.814 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:51.814 element at address: 0x200000cff100 with size: 1.000549 MiB 00:09:51.814 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59085 00:09:51.814 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:09:51.814 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59085 00:09:51.814 element at address: 0x200019affd40 with size: 1.000549 MiB 00:09:51.814 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59085 00:09:51.814 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:09:51.814 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59085 00:09:51.814 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:09:51.814 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59085 00:09:51.814 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:09:51.814 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59085 00:09:51.814 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:09:51.814 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:51.814 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:09:51.814 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:51.814 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:09:51.814 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:09:51.814 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:09:51.814 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59085 00:09:51.814 element at address: 0x20000085df80 with size: 0.125549 MiB 00:09:51.814 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59085 00:09:51.814 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:09:51.814 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:51.814 element at address: 0x200028864140 with size: 0.023804 MiB 00:09:51.814 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:51.814 element at address: 0x200000859d40 with size: 0.016174 MiB 00:09:51.814 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59085 00:09:51.814 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:09:51.814 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:51.814 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:09:51.814 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59085 00:09:51.814 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:09:51.814 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59085 00:09:51.814 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:09:51.814 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59085 00:09:51.814 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:09:51.814 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:51.814 13:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:51.814 13:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59085 00:09:51.814 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 59085 ']' 00:09:51.814 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 59085 00:09:51.814 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:09:51.814 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.814 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59085 00:09:51.814 killing process with pid 59085 00:09:51.814 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:51.815 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:51.815 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59085' 00:09:51.815 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 59085 00:09:51.815 13:35:45 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 59085 00:09:55.140 00:09:55.140 real 0m4.876s 00:09:55.140 user 0m4.951s 00:09:55.140 sys 0m0.649s 00:09:55.140 ************************************ 00:09:55.140 END TEST dpdk_mem_utility 00:09:55.140 ************************************ 00:09:55.140 13:35:48 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.140 13:35:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:55.140 13:35:48 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:55.140 13:35:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:55.140 13:35:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.140 13:35:48 -- common/autotest_common.sh@10 -- # set +x 
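The killprocess sequence just above (the empty-pid guard, the kill -0 probe, the uname check, and the ps --no-headers -o comm= lookup against pid 59085) follows a fixed pattern in autotest_common.sh. A minimal bash sketch of that pattern, reconstructed only from the calls visible in this trace rather than copied from the helper itself:

    killprocess() {
        local pid=$1
        # refuse to run with an empty pid, mirroring the '[' -z 59085 ']' guard above
        [[ -n $pid ]] || return 1
        # kill -0 only probes whether the process still exists; it sends no signal
        kill -0 "$pid" 2> /dev/null || return 0
        if [[ $(uname) == Linux ]]; then
            # resolve the command name so a sudo-owned process is not killed blindly
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        # wait only reaps children of the calling shell; an assumption of this sketch
        wait "$pid" 2> /dev/null || true
    }

The kill -0 probe is what lets the helper return quietly when the target already exited, so the surrounding test can still reach its END banner.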
00:09:55.140 ************************************ 00:09:55.140 START TEST event 00:09:55.140 ************************************ 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:55.140 * Looking for test storage... 00:09:55.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.140 13:35:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.140 13:35:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.140 13:35:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.140 13:35:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.140 13:35:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.140 13:35:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.140 13:35:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.140 13:35:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.140 13:35:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.140 13:35:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.140 13:35:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.140 13:35:48 event -- scripts/common.sh@344 -- # case "$op" in 00:09:55.140 13:35:48 event -- scripts/common.sh@345 -- # : 1 00:09:55.140 13:35:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.140 13:35:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.140 13:35:48 event -- scripts/common.sh@365 -- # decimal 1 00:09:55.140 13:35:48 event -- scripts/common.sh@353 -- # local d=1 00:09:55.140 13:35:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.140 13:35:48 event -- scripts/common.sh@355 -- # echo 1 00:09:55.140 13:35:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.140 13:35:48 event -- scripts/common.sh@366 -- # decimal 2 00:09:55.140 13:35:48 event -- scripts/common.sh@353 -- # local d=2 00:09:55.140 13:35:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.140 13:35:48 event -- scripts/common.sh@355 -- # echo 2 00:09:55.140 13:35:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.140 13:35:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.140 13:35:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.140 13:35:48 event -- scripts/common.sh@368 -- # return 0 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.140 --rc genhtml_branch_coverage=1 00:09:55.140 --rc genhtml_function_coverage=1 00:09:55.140 --rc genhtml_legend=1 00:09:55.140 --rc geninfo_all_blocks=1 00:09:55.140 --rc geninfo_unexecuted_blocks=1 00:09:55.140 00:09:55.140 ' 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.140 --rc genhtml_branch_coverage=1 00:09:55.140 --rc genhtml_function_coverage=1 00:09:55.140 --rc genhtml_legend=1 00:09:55.140 --rc 
geninfo_all_blocks=1 00:09:55.140 --rc geninfo_unexecuted_blocks=1 00:09:55.140 00:09:55.140 ' 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:55.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.140 --rc genhtml_branch_coverage=1 00:09:55.140 --rc genhtml_function_coverage=1 00:09:55.140 --rc genhtml_legend=1 00:09:55.140 --rc geninfo_all_blocks=1 00:09:55.140 --rc geninfo_unexecuted_blocks=1 00:09:55.140 00:09:55.140 ' 00:09:55.140 13:35:48 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.140 --rc genhtml_branch_coverage=1 00:09:55.141 --rc genhtml_function_coverage=1 00:09:55.141 --rc genhtml_legend=1 00:09:55.141 --rc geninfo_all_blocks=1 00:09:55.141 --rc geninfo_unexecuted_blocks=1 00:09:55.141 00:09:55.141 ' 00:09:55.141 13:35:48 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:55.141 13:35:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:55.141 13:35:48 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:55.141 13:35:48 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:09:55.141 13:35:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.141 13:35:48 event -- common/autotest_common.sh@10 -- # set +x 00:09:55.141 ************************************ 00:09:55.141 START TEST event_perf 00:09:55.141 ************************************ 00:09:55.141 13:35:48 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:55.141 Running I/O for 1 seconds...[2024-11-06 13:35:48.903782] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:09:55.141 [2024-11-06 13:35:48.904160] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59200 ] 00:09:55.141 [2024-11-06 13:35:49.109257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.398 [2024-11-06 13:35:49.292597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.398 [2024-11-06 13:35:49.292689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.398 [2024-11-06 13:35:49.292800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.398 Running I/O for 1 seconds...[2024-11-06 13:35:49.292816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.770 00:09:56.770 lcore 0: 165946 00:09:56.770 lcore 1: 165946 00:09:56.770 lcore 2: 165947 00:09:56.770 lcore 3: 165946 00:09:56.770 done. 
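Before each test group, autotest_common.sh compares the installed lcov version against 1.15 via the scripts/common.sh helpers traced above (IFS=.-: splits, then per-field decimal checks). A condensed sketch of that comparison, assuming purely numeric version fields (the real helper also copes with suffixes such as -pre):

    cmp_versions() {
        local op=$2 ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"   # split on dots, dashes, and colons, as in the trace
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # a missing field compares as 0
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' ]] && return 0 || return 1
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' ]] && return 0 || return 1
            fi
        done
        return 1   # equal versions satisfy neither < nor >
    }

    # 'lt 1.15 2', as run above for the lcov version, returns 0 when $1 < $2
    lt() { cmp_versions "$1" '<' "$2"; }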
00:09:56.770 ************************************ 00:09:56.770 END TEST event_perf 00:09:56.770 ************************************ 00:09:56.770 00:09:56.770 real 0m1.723s 00:09:56.770 user 0m4.433s 00:09:56.770 sys 0m0.162s 00:09:56.770 13:35:50 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.770 13:35:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:56.770 13:35:50 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:56.770 13:35:50 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:56.770 13:35:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.770 13:35:50 event -- common/autotest_common.sh@10 -- # set +x 00:09:56.770 ************************************ 00:09:56.770 START TEST event_reactor 00:09:56.770 ************************************ 00:09:56.770 13:35:50 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:56.770 [2024-11-06 13:35:50.680463] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:09:56.770 [2024-11-06 13:35:50.680676] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59239 ] 00:09:57.028 [2024-11-06 13:35:50.862421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.028 [2024-11-06 13:35:50.992712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.405 test_start 00:09:58.405 oneshot 00:09:58.405 tick 100 00:09:58.405 tick 100 00:09:58.405 tick 250 00:09:58.405 tick 100 00:09:58.405 tick 100 00:09:58.405 tick 100 00:09:58.405 tick 250 00:09:58.405 tick 500 00:09:58.405 tick 100 00:09:58.405 tick 100 00:09:58.405 tick 250 00:09:58.405 tick 100 00:09:58.405 tick 100 00:09:58.405 test_end 00:09:58.405 00:09:58.405 real 0m1.615s 00:09:58.405 user 0m1.394s 00:09:58.405 sys 0m0.110s 00:09:58.405 ************************************ 00:09:58.405 END TEST event_reactor 00:09:58.405 ************************************ 00:09:58.405 13:35:52 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.405 13:35:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:58.405 13:35:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:58.405 13:35:52 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:58.405 13:35:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.405 13:35:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:58.405 ************************************ 00:09:58.405 START TEST event_reactor_perf 00:09:58.405 ************************************ 00:09:58.405 13:35:52 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:58.405 [2024-11-06 13:35:52.367428] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
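Every START TEST / END TEST banner and real/user/sys triplet in this log comes from the run_test wrapper. A rough sketch of what such a wrapper does, inferred from the output it produces here (the actual helper in autotest_common.sh carries extra xtrace bookkeeping):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        # the bash 'time' keyword prints the real/user/sys triplet seen after each test
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

Invoked here as, for example, run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1.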
00:09:58.405 [2024-11-06 13:35:52.367889] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59279 ] 00:09:58.686 [2024-11-06 13:35:52.567970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.943 [2024-11-06 13:35:52.739772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.319 test_start 00:10:00.319 test_end 00:10:00.319 Performance: 295182 events per second 00:10:00.319 ************************************ 00:10:00.319 END TEST event_reactor_perf 00:10:00.319 ************************************ 00:10:00.319 00:10:00.319 real 0m1.677s 00:10:00.319 user 0m1.432s 00:10:00.319 sys 0m0.133s 00:10:00.319 13:35:53 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:00.319 13:35:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:00.319 13:35:54 event -- event/event.sh@49 -- # uname -s 00:10:00.319 13:35:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:00.319 13:35:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:00.319 13:35:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:00.319 13:35:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.319 13:35:54 event -- common/autotest_common.sh@10 -- # set +x 00:10:00.319 ************************************ 00:10:00.319 START TEST event_scheduler 00:10:00.319 ************************************ 00:10:00.319 13:35:54 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:00.319 * Looking for test storage... 
00:10:00.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:00.319 13:35:54 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:00.319 13:35:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:10:00.319 13:35:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:00.319 13:35:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:00.319 13:35:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.319 13:35:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.319 13:35:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.319 13:35:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.319 13:35:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.319 13:35:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.319 13:35:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.319 13:35:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:00.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.320 13:35:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.320 --rc genhtml_branch_coverage=1 00:10:00.320 --rc genhtml_function_coverage=1 00:10:00.320 --rc genhtml_legend=1 00:10:00.320 --rc geninfo_all_blocks=1 00:10:00.320 --rc geninfo_unexecuted_blocks=1 00:10:00.320 00:10:00.320 ' 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.320 --rc genhtml_branch_coverage=1 00:10:00.320 --rc genhtml_function_coverage=1 00:10:00.320 --rc genhtml_legend=1 00:10:00.320 --rc geninfo_all_blocks=1 00:10:00.320 --rc geninfo_unexecuted_blocks=1 00:10:00.320 00:10:00.320 ' 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.320 --rc genhtml_branch_coverage=1 00:10:00.320 --rc genhtml_function_coverage=1 00:10:00.320 --rc genhtml_legend=1 00:10:00.320 --rc geninfo_all_blocks=1 00:10:00.320 --rc geninfo_unexecuted_blocks=1 00:10:00.320 00:10:00.320 ' 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.320 --rc genhtml_branch_coverage=1 00:10:00.320 --rc genhtml_function_coverage=1 00:10:00.320 --rc genhtml_legend=1 00:10:00.320 --rc geninfo_all_blocks=1 00:10:00.320 --rc geninfo_unexecuted_blocks=1 00:10:00.320 00:10:00.320 ' 00:10:00.320 13:35:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:00.320 13:35:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59352 00:10:00.320 13:35:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:00.320 13:35:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59352 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59352 ']' 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:00.320 13:35:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:00.320 13:35:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:00.578 [2024-11-06 13:35:54.361826] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
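The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is printed by waitforlisten before it polls the newly launched scheduler app (pid 59352). A sketch of that polling loop; the choice of spdk_get_version as the probe RPC is an assumption of this sketch, not taken from the trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            # give up early if the target process already exited
            kill -0 "$pid" 2> /dev/null || return 1
            # probe the RPC socket; any cheap RPC that answers once the app is up will do
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                    spdk_get_version &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }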
00:10:00.578 [2024-11-06 13:35:54.362215] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59352 ] 00:10:00.578 [2024-11-06 13:35:54.551211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.837 [2024-11-06 13:35:54.734177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.837 [2024-11-06 13:35:54.734283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.837 [2024-11-06 13:35:54.734438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.837 [2024-11-06 13:35:54.734510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.771 13:35:55 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:01.771 13:35:55 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:10:01.771 13:35:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:01.771 13:35:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.771 13:35:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:01.771 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:01.771 POWER: Cannot set governor of lcore 0 to userspace 00:10:01.771 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:01.771 POWER: Cannot set governor of lcore 0 to performance 00:10:01.771 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:01.771 POWER: Cannot set governor of lcore 0 to userspace 00:10:01.771 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:01.771 POWER: Cannot set governor of lcore 0 to userspace 00:10:01.771 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:01.771 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:01.771 POWER: Unable to set Power Management Environment for lcore 0 00:10:01.771 [2024-11-06 13:35:55.397770] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:01.771 [2024-11-06 13:35:55.398015] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:01.771 [2024-11-06 13:35:55.398086] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:01.771 [2024-11-06 13:35:55.398158] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:01.771 [2024-11-06 13:35:55.398381] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:01.771 [2024-11-06 13:35:55.398405] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:01.771 13:35:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.771 13:35:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:01.771 13:35:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.771 13:35:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 [2024-11-06 13:35:55.779225] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
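The POWER errors above are expected inside a VM: with no writable cpufreq governors and no virtio power channel, the dynamic scheduler cannot initialize the dpdk governor and falls back to its built-in defaults (load limit 20, core limit 80, core busy 95). The driving RPC sequence, exactly as issued by scheduler.sh against the app started with --wait-for-rpc:

    rpc_cmd framework_set_scheduler dynamic   # logs the governor fallback seen above
    rpc_cmd framework_start_init              # only now do the subsystems initialize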
00:10:02.029 13:35:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:02.029 13:35:55 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:02.029 13:35:55 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 ************************************ 00:10:02.029 START TEST scheduler_create_thread 00:10:02.029 ************************************ 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 2 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 3 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 4 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 5 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 6 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 7 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 8 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 9 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 10 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.029 13:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:03.400 13:35:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.400 13:35:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:03.400 13:35:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:03.400 13:35:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.400 13:35:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:04.773 ************************************ 00:10:04.773 END TEST scheduler_create_thread 00:10:04.773 ************************************ 00:10:04.773 13:35:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.773 00:10:04.773 real 0m2.619s 00:10:04.773 user 0m0.019s 00:10:04.773 sys 0m0.011s 00:10:04.773 13:35:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.773 13:35:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:04.773 13:35:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:04.773 13:35:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59352 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59352 ']' 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59352 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59352 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:10:04.773 killing process with pid 59352 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59352' 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59352 00:10:04.773 13:35:58 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 59352 00:10:05.031 [2024-11-06 13:35:58.891378] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
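scheduler_create_thread above exercises the scheduler through the plugin RPCs: four fully active threads pinned one per core, four idle threads pinned the same way, two unpinned threads, one live activity change, and one deletion. The same sequence in condensed form (thread ids 11 and 12 are the values the create calls returned in this run):

    # scheduler.sh@12-15: one always-busy thread per core
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    # scheduler.sh@16-19: one idle thread per core
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # unpinned threads, then an activity change and a deletion
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)   # 11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)     # 12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"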
00:10:06.404 00:10:06.404 real 0m6.178s 00:10:06.404 user 0m10.825s 00:10:06.404 sys 0m0.562s 00:10:06.404 13:36:00 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.404 ************************************ 00:10:06.404 END TEST event_scheduler 00:10:06.404 ************************************ 00:10:06.404 13:36:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:06.404 13:36:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:06.404 13:36:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:06.404 13:36:00 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:06.404 13:36:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.404 13:36:00 event -- common/autotest_common.sh@10 -- # set +x 00:10:06.404 ************************************ 00:10:06.404 START TEST app_repeat 00:10:06.404 ************************************ 00:10:06.404 13:36:00 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59469 00:10:06.404 Process app_repeat pid: 59469 00:10:06.404 spdk_app_start Round 0 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59469' 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:06.404 13:36:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59469 /var/tmp/spdk-nbd.sock 00:10:06.404 13:36:00 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59469 ']' 00:10:06.404 13:36:00 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:06.404 13:36:00 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:06.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:06.404 13:36:00 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:06.404 13:36:00 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.404 13:36:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:06.404 [2024-11-06 13:36:00.380209] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
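app_repeat, started at event.sh@18 with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, is driven over three rounds; "spdk_app_start Round 0" above is the first. A rough reconstruction of the driving loop, under the assumption that every round repeats the same bdev/nbd verify step shown below:

    # event.sh@23: three rounds against the same app instance
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # each round creates Malloc0/Malloc1 (bdev_malloc_create 64 4096),
        # attaches them to /dev/nbd0 and /dev/nbd1, and verifies the data path
        # before the app is restarted for the next round
    done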
00:10:06.404 [2024-11-06 13:36:00.380380] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59469 ] 00:10:06.662 [2024-11-06 13:36:00.578978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.920 [2024-11-06 13:36:00.724168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.920 [2024-11-06 13:36:00.724194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.489 13:36:01 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:07.489 13:36:01 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:07.489 13:36:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:07.748 Malloc0 00:10:07.748 13:36:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:08.315 Malloc1 00:10:08.315 13:36:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:08.315 13:36:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:08.572 /dev/nbd0 00:10:08.572 13:36:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:08.572 13:36:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:08.572 13:36:02 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:08.572 1+0 records in 00:10:08.572 1+0 records out 00:10:08.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495815 s, 8.3 MB/s 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:08.572 13:36:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:08.572 13:36:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.572 13:36:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:08.572 13:36:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:08.830 /dev/nbd1 00:10:08.830 13:36:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:08.830 13:36:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:08.830 1+0 records in 00:10:08.830 1+0 records out 00:10:08.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377511 s, 10.9 MB/s 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:08.830 13:36:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:08.830 13:36:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.830 13:36:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:08.831 13:36:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:08.831 13:36:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.831 
13:36:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:09.397 { 00:10:09.397 "nbd_device": "/dev/nbd0", 00:10:09.397 "bdev_name": "Malloc0" 00:10:09.397 }, 00:10:09.397 { 00:10:09.397 "nbd_device": "/dev/nbd1", 00:10:09.397 "bdev_name": "Malloc1" 00:10:09.397 } 00:10:09.397 ]' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:09.397 { 00:10:09.397 "nbd_device": "/dev/nbd0", 00:10:09.397 "bdev_name": "Malloc0" 00:10:09.397 }, 00:10:09.397 { 00:10:09.397 "nbd_device": "/dev/nbd1", 00:10:09.397 "bdev_name": "Malloc1" 00:10:09.397 } 00:10:09.397 ]' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:09.397 /dev/nbd1' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:09.397 /dev/nbd1' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:09.397 256+0 records in 00:10:09.397 256+0 records out 00:10:09.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656223 s, 160 MB/s 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:09.397 256+0 records in 00:10:09.397 256+0 records out 00:10:09.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333867 s, 31.4 MB/s 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:09.397 256+0 records in 00:10:09.397 256+0 records out 00:10:09.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278156 s, 37.7 MB/s 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:09.397 13:36:03 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:09.397 13:36:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:09.398 13:36:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:09.656 13:36:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:09.915 13:36:03 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.915 13:36:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:10.481 13:36:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:10.481 13:36:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:11.049 13:36:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:12.423 [2024-11-06 13:36:06.311902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:12.681 [2024-11-06 13:36:06.449876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.681 [2024-11-06 13:36:06.449886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.939 [2024-11-06 13:36:06.678992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:12.939 [2024-11-06 13:36:06.679123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:14.360 spdk_app_start Round 1 00:10:14.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:14.360 13:36:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:14.360 13:36:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:14.360 13:36:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59469 /var/tmp/spdk-nbd.sock 00:10:14.360 13:36:07 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59469 ']' 00:10:14.360 13:36:07 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:14.360 13:36:07 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.360 13:36:07 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
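Round 0 above closed with the harness's write-then-verify data check: 1 MiB of /dev/urandom is staged in a temporary file, written through each NBD device with O_DIRECT, and compared back byte-for-byte with cmp before the file is removed. Condensed, the flow traced above amounts to the following (the temporary path here is illustrative):
    # Write 1 MiB of random data through each NBD device, then verify it.
    tmp=/tmp/nbdrandtest                            # illustrative path
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # O_DIRECT skips the page cache
        cmp -b -n 1M "$tmp" "$nbd"                  # nonzero exit on any differing byte
    done
    rm "$tmp"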
00:10:14.360 13:36:07 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.360 13:36:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:14.360 13:36:08 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.360 13:36:08 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:14.360 13:36:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:14.617 Malloc0 00:10:14.617 13:36:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:14.874 Malloc1 00:10:14.874 13:36:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:14.874 13:36:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:15.438 /dev/nbd0 00:10:15.438 13:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:15.438 13:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:15.438 1+0 records in 00:10:15.438 1+0 records out 
00:10:15.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461187 s, 8.9 MB/s 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:15.438 13:36:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:15.438 13:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:15.438 13:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:15.438 13:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:15.695 /dev/nbd1 00:10:15.695 13:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:15.695 13:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:15.695 1+0 records in 00:10:15.695 1+0 records out 00:10:15.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350426 s, 11.7 MB/s 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:15.695 13:36:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:15.695 13:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:15.695 13:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:15.695 13:36:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:15.695 13:36:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:15.695 13:36:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:15.952 { 00:10:15.952 "nbd_device": "/dev/nbd0", 00:10:15.952 "bdev_name": "Malloc0" 00:10:15.952 }, 00:10:15.952 { 00:10:15.952 "nbd_device": "/dev/nbd1", 00:10:15.952 "bdev_name": "Malloc1" 00:10:15.952 } 
00:10:15.952 ]' 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:15.952 { 00:10:15.952 "nbd_device": "/dev/nbd0", 00:10:15.952 "bdev_name": "Malloc0" 00:10:15.952 }, 00:10:15.952 { 00:10:15.952 "nbd_device": "/dev/nbd1", 00:10:15.952 "bdev_name": "Malloc1" 00:10:15.952 } 00:10:15.952 ]' 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:15.952 /dev/nbd1' 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:15.952 /dev/nbd1' 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:15.952 256+0 records in 00:10:15.952 256+0 records out 00:10:15.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00730557 s, 144 MB/s 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:15.952 256+0 records in 00:10:15.952 256+0 records out 00:10:15.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276258 s, 38.0 MB/s 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:15.952 256+0 records in 00:10:15.952 256+0 records out 00:10:15.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279856 s, 37.5 MB/s 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:15.952 13:36:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:15.952 13:36:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:16.208 13:36:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:16.208 13:36:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:16.208 13:36:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:16.208 13:36:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:16.208 13:36:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:16.208 13:36:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:16.208 13:36:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:16.465 13:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:16.465 13:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:16.465 13:36:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:16.466 13:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:16.466 13:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:16.466 13:36:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:16.466 13:36:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:16.466 13:36:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:16.466 13:36:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:16.466 13:36:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:16.724 13:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:16.981 13:36:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:16.981 13:36:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:17.547 13:36:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:18.920 [2024-11-06 13:36:12.729870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:18.920 [2024-11-06 13:36:12.866055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.920 [2024-11-06 13:36:12.866067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.178 [2024-11-06 13:36:13.104474] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:19.178 [2024-11-06 13:36:13.104579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:20.572 spdk_app_start Round 2 00:10:20.572 13:36:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:20.572 13:36:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:20.572 13:36:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59469 /var/tmp/spdk-nbd.sock 00:10:20.572 13:36:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59469 ']' 00:10:20.572 13:36:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:20.572 13:36:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:20.572 13:36:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:20.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
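The (( i <= 20 )) loops threaded through the waitfornbd traces above are the suite's bounded-retry idiom: poll for the device to appear in /proc/partitions up to twenty times, then prove it actually serves I/O with one direct 4 KiB read. A generic sketch of the idiom, assuming a one-second pause between probes (the interval is not visible in the trace, and the traced helper stats the bytes it read rather than discarding them):
    # Poll until an nbd device shows up in /proc/partitions, with a retry cap.
    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 1                                 # assumed back-off
        done
        (( i <= 20 )) || return 1                   # retries exhausted
        # One O_DIRECT read confirms the device answers real I/O, not just udev setup.
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }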
00:10:20.572 13:36:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:20.572 13:36:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:20.830 13:36:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:20.830 13:36:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:20.830 13:36:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:21.088 Malloc0 00:10:21.088 13:36:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:21.347 Malloc1 00:10:21.347 13:36:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.347 13:36:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:21.605 /dev/nbd0 00:10:21.605 13:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:21.605 13:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:21.605 1+0 records in 00:10:21.605 1+0 records out 
00:10:21.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320588 s, 12.8 MB/s 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:21.605 13:36:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:21.605 13:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:21.605 13:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.605 13:36:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:22.172 /dev/nbd1 00:10:22.172 13:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:22.172 13:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:22.172 1+0 records in 00:10:22.172 1+0 records out 00:10:22.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359 s, 11.4 MB/s 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:22.172 13:36:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:22.172 13:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:22.172 13:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:22.172 13:36:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:22.172 13:36:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.172 13:36:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:22.430 { 00:10:22.430 "nbd_device": "/dev/nbd0", 00:10:22.430 "bdev_name": "Malloc0" 00:10:22.430 }, 00:10:22.430 { 00:10:22.430 "nbd_device": "/dev/nbd1", 00:10:22.430 "bdev_name": "Malloc1" 00:10:22.430 } 00:10:22.430 
]' 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:22.430 { 00:10:22.430 "nbd_device": "/dev/nbd0", 00:10:22.430 "bdev_name": "Malloc0" 00:10:22.430 }, 00:10:22.430 { 00:10:22.430 "nbd_device": "/dev/nbd1", 00:10:22.430 "bdev_name": "Malloc1" 00:10:22.430 } 00:10:22.430 ]' 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:22.430 /dev/nbd1' 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:22.430 /dev/nbd1' 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:22.430 256+0 records in 00:10:22.430 256+0 records out 00:10:22.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00736036 s, 142 MB/s 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:22.430 256+0 records in 00:10:22.430 256+0 records out 00:10:22.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342793 s, 30.6 MB/s 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:22.430 256+0 records in 00:10:22.430 256+0 records out 00:10:22.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278219 s, 37.7 MB/s 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:22.430 13:36:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.431 13:36:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:22.699 13:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.959 13:36:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.219 13:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:23.478 13:36:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:23.478 13:36:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:24.043 13:36:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:25.419 [2024-11-06 13:36:19.266081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:25.678 [2024-11-06 13:36:19.403679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.678 [2024-11-06 13:36:19.403689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.678 [2024-11-06 13:36:19.639325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:25.678 [2024-11-06 13:36:19.639419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:27.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:27.048 13:36:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59469 /var/tmp/spdk-nbd.sock 00:10:27.048 13:36:20 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59469 ']' 00:10:27.048 13:36:20 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:27.048 13:36:20 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:27.048 13:36:20 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
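Each round then hands shutdown back to the target itself: the spdk_kill_instance RPC makes the app deliver SIGTERM to its own signal handler, and the sleep 3 that follows gives the reactors time to drain before the next round reinitializes. As a usage example:
    # Ask the running SPDK app to terminate itself, then let it drain.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3    # matches the trace; the reactors log their shutdown notices meanwhile
Delivering the signal via RPC rather than kill(1) exercises the app's own shutdown path, which is what the subsequent notify.c and reactor.c notices confirm.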
00:10:27.048 13:36:20 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:27.048 13:36:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:27.306 13:36:21 event.app_repeat -- event/event.sh@39 -- # killprocess 59469 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59469 ']' 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59469 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59469 00:10:27.306 killing process with pid 59469 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59469' 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59469 00:10:27.306 13:36:21 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59469 00:10:28.680 spdk_app_start is called in Round 0. 00:10:28.680 Shutdown signal received, stop current app iteration 00:10:28.680 Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 reinitialization... 00:10:28.680 spdk_app_start is called in Round 1. 00:10:28.680 Shutdown signal received, stop current app iteration 00:10:28.680 Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 reinitialization... 00:10:28.680 spdk_app_start is called in Round 2. 00:10:28.680 Shutdown signal received, stop current app iteration 00:10:28.680 Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 reinitialization... 00:10:28.680 spdk_app_start is called in Round 3. 00:10:28.680 Shutdown signal received, stop current app iteration 00:10:28.680 ************************************ 00:10:28.680 END TEST app_repeat 00:10:28.680 ************************************ 00:10:28.680 13:36:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:28.680 13:36:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:28.680 00:10:28.680 real 0m22.159s 00:10:28.680 user 0m48.158s 00:10:28.680 sys 0m3.647s 00:10:28.680 13:36:22 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:28.680 13:36:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:28.680 13:36:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:28.680 13:36:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:28.680 13:36:22 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:28.680 13:36:22 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.680 13:36:22 event -- common/autotest_common.sh@10 -- # set +x 00:10:28.680 ************************************ 00:10:28.680 START TEST cpu_locks 00:10:28.680 ************************************ 00:10:28.680 13:36:22 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:28.680 * Looking for test storage... 
00:10:28.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:28.680 13:36:22 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:28.680 13:36:22 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:10:28.680 13:36:22 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.938 13:36:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:28.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.938 --rc genhtml_branch_coverage=1 00:10:28.938 --rc genhtml_function_coverage=1 00:10:28.938 --rc genhtml_legend=1 00:10:28.938 --rc geninfo_all_blocks=1 00:10:28.938 --rc geninfo_unexecuted_blocks=1 00:10:28.938 00:10:28.938 ' 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:28.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.938 --rc genhtml_branch_coverage=1 00:10:28.938 --rc genhtml_function_coverage=1 
00:10:28.938 --rc genhtml_legend=1 00:10:28.938 --rc geninfo_all_blocks=1 00:10:28.938 --rc geninfo_unexecuted_blocks=1 00:10:28.938 00:10:28.938 ' 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:28.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.938 --rc genhtml_branch_coverage=1 00:10:28.938 --rc genhtml_function_coverage=1 00:10:28.938 --rc genhtml_legend=1 00:10:28.938 --rc geninfo_all_blocks=1 00:10:28.938 --rc geninfo_unexecuted_blocks=1 00:10:28.938 00:10:28.938 ' 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:28.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.938 --rc genhtml_branch_coverage=1 00:10:28.938 --rc genhtml_function_coverage=1 00:10:28.938 --rc genhtml_legend=1 00:10:28.938 --rc geninfo_all_blocks=1 00:10:28.938 --rc geninfo_unexecuted_blocks=1 00:10:28.938 00:10:28.938 ' 00:10:28.938 13:36:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:28.938 13:36:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:28.938 13:36:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:28.938 13:36:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.938 13:36:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:28.938 ************************************ 00:10:28.938 START TEST default_locks 00:10:28.938 ************************************ 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59950 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59950 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59950 ']' 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:28.938 13:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:28.938 [2024-11-06 13:36:22.802719] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
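[editor's note] The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates version 2, which selects the legacy --rc lcov_branch_coverage/lcov_function_coverage option set exported into LCOV_OPTS and LCOV. A minimal bash sketch of that element-wise comparison, assuming purely numeric version fields (version_lt is an illustrative name, not the helper's real one):

# Return 0 (true) when dotted version $1 sorts strictly before $2.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"    # "1.15" -> (1 15), same separators as the trace
  IFS='.-:' read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1    # equal is not "less than"
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'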
00:10:28.938 [2024-11-06 13:36:22.803095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59950 ] 00:10:29.196 [2024-11-06 13:36:22.983808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.196 [2024-11-06 13:36:23.144613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.593 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:30.593 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:10:30.593 13:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59950 00:10:30.593 13:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59950 00:10:30.593 13:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59950 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59950 ']' 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59950 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59950 00:10:30.851 killing process with pid 59950 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59950' 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59950 00:10:30.851 13:36:24 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59950 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59950 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59950 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59950 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59950 ']' 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:34.139 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.139 ERROR: process (pid: 59950) is no longer running 00:10:34.139 ************************************ 00:10:34.139 END TEST default_locks 00:10:34.139 ************************************ 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:34.139 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59950) - No such process 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:34.139 00:10:34.139 real 0m5.045s 00:10:34.139 user 0m5.194s 00:10:34.139 sys 0m0.822s 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.139 13:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:34.139 13:36:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:34.139 13:36:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:34.139 13:36:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:34.139 13:36:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:34.139 ************************************ 00:10:34.139 START TEST default_locks_via_rpc 00:10:34.139 ************************************ 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60037 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60037 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60037 ']' 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:34.139 13:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.139 [2024-11-06 13:36:27.936164] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:10:34.139 [2024-11-06 13:36:27.936536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ] 00:10:34.396 [2024-11-06 13:36:28.132750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.396 [2024-11-06 13:36:28.301907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60037 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60037 00:10:35.770 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60037 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 60037 ']' 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 60037 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60037 00:10:36.030 killing process with pid 60037 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60037' 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 60037 00:10:36.030 13:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 60037 00:10:39.311 ************************************ 00:10:39.311 END TEST default_locks_via_rpc 00:10:39.311 ************************************ 00:10:39.311 00:10:39.311 real 0m5.023s 00:10:39.311 user 0m5.035s 00:10:39.311 sys 0m0.757s 00:10:39.311 13:36:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:39.311 13:36:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.311 13:36:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:39.311 13:36:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:39.311 13:36:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:39.311 13:36:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:39.311 ************************************ 00:10:39.311 START TEST non_locking_app_on_locked_coremask 00:10:39.311 ************************************ 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60122 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:39.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60122 /var/tmp/spdk.sock 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60122 ']' 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:39.311 13:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:39.311 [2024-11-06 13:36:32.969394] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
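[editor's note] Both tests above assert lock ownership the same way: the locks_exist helper (event/cpu_locks.sh@22 in the trace) asks lslocks which file locks a PID holds and greps for the spdk_cpu_lock prefix. A sketch of that check as a standalone function:

# Return 0 if the given PID holds at least one SPDK CPU-core lock file.
locks_exist() {
  local pid=$1
  # lslocks prints one row per lock held by the process; core locks taken
  # by spdk_tgt show up with "spdk_cpu_lock" in the path column.
  lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 60037 && echo "spdk_tgt pid 60037 owns its core lock"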
00:10:39.311 [2024-11-06 13:36:32.969779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:10:39.311 [2024-11-06 13:36:33.150949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.311 [2024-11-06 13:36:33.286803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60143 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60143 /var/tmp/spdk2.sock 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60143 ']' 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:40.688 13:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:40.688 [2024-11-06 13:36:34.432196] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:10:40.688 [2024-11-06 13:36:34.432384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60143 ] 00:10:40.688 [2024-11-06 13:36:34.634969] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
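[editor's note] The launch pair above is the heart of non_locking_app_on_locked_coremask: pid 60122 claims core 0 normally, then a second spdk_tgt is started on the same mask but with --disable-cpumask-locks and a private RPC socket, so it never attempts the claim and both can run. A condensed sketch of that arrangement (SPDK_BIN stands in for the build/bin/spdk_tgt path from the trace):

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &                       # first instance: claims core 0
pid1=$!

# Same core, but opted out of lock claiming and on its own RPC socket;
# without --disable-cpumask-locks this second start would fail (see the
# locking_app_on_locked_coremask test later in this log).
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!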
00:10:40.688 [2024-11-06 13:36:34.635055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.946 [2024-11-06 13:36:34.910219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.480 13:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:43.480 13:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:43.480 13:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60122 00:10:43.480 13:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60122 00:10:43.480 13:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60122 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60122 ']' 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60122 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60122 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:44.417 killing process with pid 60122 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60122' 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60122 00:10:44.417 13:36:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60122 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60143 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60143 ']' 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60143 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60143 00:10:50.985 killing process with pid 60143 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60143' 00:10:50.985 13:36:43 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60143 00:10:50.985 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60143 00:10:52.887 00:10:52.887 real 0m13.946s 00:10:52.887 user 0m14.654s 00:10:52.887 sys 0m1.636s 00:10:52.887 13:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.887 ************************************ 00:10:52.887 END TEST non_locking_app_on_locked_coremask 00:10:52.887 ************************************ 00:10:52.887 13:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:52.887 13:36:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:52.887 13:36:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:52.887 13:36:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.887 13:36:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:52.887 ************************************ 00:10:52.887 START TEST locking_app_on_unlocked_coremask 00:10:52.887 ************************************ 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60312 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60312 /var/tmp/spdk.sock 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60312 ']' 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:52.887 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:53.145 [2024-11-06 13:36:46.992807] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:10:53.145 [2024-11-06 13:36:46.992979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60312 ] 00:10:53.403 [2024-11-06 13:36:47.191441] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
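[editor's note] Every teardown above runs the same killprocess shape: confirm the PID argument, confirm the process is alive, resolve its command name (reactor_0 for an SPDK reactor) so a sudo wrapper is never signalled by mistake, then kill and reap. A condensed sketch of the Linux branch seen in the traces:

killprocess() {
  local pid=$1 process_name=
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2>/dev/null || return 1              # still running?
  if [[ $(uname) == Linux ]]; then
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. "reactor_0"
  fi
  [[ $process_name == sudo ]] && return 1             # never kill the wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                             # reap if it is our child
}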
00:10:53.403 [2024-11-06 13:36:47.191524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.403 [2024-11-06 13:36:47.355376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60335 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60335 /var/tmp/spdk2.sock 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60335 ']' 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:54.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:54.778 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.778 [2024-11-06 13:36:48.502433] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
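[editor's note] locking_app_on_unlocked_coremask inverts the previous test: the first instance (pid 60312) starts with --disable-cpumask-locks, leaving core 0 unclaimed, and the second instance launched just above takes the lock normally via /var/tmp/spdk2.sock. The later locks_exist 60335 check in this log confirms the lock belongs to the second process. In sketch form, reusing the SPDK_BIN variable assumed earlier:

"$SPDK_BIN" -m 0x1 --disable-cpumask-locks &      # pid A: leaves core 0 free
"$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &       # pid B: claims core 0
# Expected: lslocks -p <pid B> | grep spdk_cpu_lock matches; pid A holds none.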
00:10:54.778 [2024-11-06 13:36:48.502681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60335 ] 00:10:54.778 [2024-11-06 13:36:48.757722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.345 [2024-11-06 13:36:49.128275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.879 13:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:57.879 13:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:57.879 13:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60335 00:10:57.879 13:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60335 00:10:57.879 13:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:58.470 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60312 00:10:58.470 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60312 ']' 00:10:58.470 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60312 00:10:58.470 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:58.470 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:58.470 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60312 00:10:58.470 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:58.728 killing process with pid 60312 00:10:58.728 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:58.728 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60312' 00:10:58.728 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60312 00:10:58.728 13:36:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60312 00:11:05.293 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60335 00:11:05.293 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60335 ']' 00:11:05.293 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60335 00:11:05.293 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:05.293 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:05.293 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60335 00:11:05.294 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:05.294 killing process with pid 60335 00:11:05.294 13:36:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:05.294 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60335' 00:11:05.294 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60335 00:11:05.294 13:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60335 00:11:07.195 00:11:07.195 real 0m14.098s 00:11:07.195 user 0m14.443s 00:11:07.195 sys 0m1.893s 00:11:07.195 13:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.195 13:37:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:07.195 ************************************ 00:11:07.195 END TEST locking_app_on_unlocked_coremask 00:11:07.195 ************************************ 00:11:07.195 13:37:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:07.195 13:37:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:07.195 13:37:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.195 13:37:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:07.195 ************************************ 00:11:07.195 START TEST locking_app_on_locked_coremask 00:11:07.195 ************************************ 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60505 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60505 /var/tmp/spdk.sock 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60505 ']' 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:07.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:07.195 13:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:07.195 [2024-11-06 13:37:01.149573] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
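[editor's note] waitforlisten, traced repeatedly above with max_retries=100, blocks until a freshly forked spdk_tgt is both alive and answering on its UNIX domain RPC socket. A simplified sketch of the loop; the real helper probes the socket with an RPC call rather than a bare socket-file test:

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for (( i = max_retries; i != 0; i-- )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
    [[ -S $rpc_addr ]] && return 0           # socket exists: assume listening
    sleep 0.5
  done
  return 1                                   # retries exhausted
}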
00:11:07.195 [2024-11-06 13:37:01.149759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60505 ] 00:11:07.453 [2024-11-06 13:37:01.358673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.711 [2024-11-06 13:37:01.544957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60527 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60527 /var/tmp/spdk2.sock 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60527 /var/tmp/spdk2.sock 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60527 /var/tmp/spdk2.sock 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60527 ']' 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:08.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.745 13:37:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:08.745 [2024-11-06 13:37:02.666711] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
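[editor's note] The NOT wrapper invoked above is the harness's expected-failure assertion: this test passes only because waitforlisten 60527 fails, since pid 60505 already holds the core 0 lock (the claim error follows immediately below). A reduced sketch of the inversion logic, mirroring the es bookkeeping in the trace:

# Succeed only when the wrapped command fails.
NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && es=1   # normalize signal deaths, as in the (( es > 128 )) trace
  (( es != 0 ))            # invert: the command failing makes NOT succeed
}

NOT waitforlisten 60527 /var/tmp/spdk2.sock &&
  echo "second instance on a locked core failed to start, as expected"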
00:11:08.745 [2024-11-06 13:37:02.666863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:11:09.004 [2024-11-06 13:37:02.868552] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60505 has claimed it. 00:11:09.004 [2024-11-06 13:37:02.868659] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:09.570 ERROR: process (pid: 60527) is no longer running 00:11:09.570 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60527) - No such process 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60505 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:09.570 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60505 00:11:10.136 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60505 00:11:10.136 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60505 ']' 00:11:10.136 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60505 00:11:10.137 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:10.137 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:10.137 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60505 00:11:10.137 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:10.137 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:10.137 killing process with pid 60505 00:11:10.137 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60505' 00:11:10.137 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60505 00:11:10.137 13:37:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60505 00:11:13.419 00:11:13.419 real 0m6.136s 00:11:13.419 user 0m6.408s 00:11:13.419 sys 0m0.990s 00:11:13.419 ************************************ 00:11:13.419 END TEST locking_app_on_locked_coremask 00:11:13.419 ************************************ 00:11:13.419 13:37:07 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.419 13:37:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:13.419 13:37:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:13.419 13:37:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:13.419 13:37:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.419 13:37:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:13.419 ************************************ 00:11:13.419 START TEST locking_overlapped_coremask 00:11:13.419 ************************************ 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60607 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60607 /var/tmp/spdk.sock 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60607 ']' 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.419 13:37:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:13.419 [2024-11-06 13:37:07.323643] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:11:13.419 [2024-11-06 13:37:07.323794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 00:11:13.677 [2024-11-06 13:37:07.508640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:13.936 [2024-11-06 13:37:07.704083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.936 [2024-11-06 13:37:07.704196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.936 [2024-11-06 13:37:07.704230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60631 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60631 /var/tmp/spdk2.sock 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60631 /var/tmp/spdk2.sock 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60631 /var/tmp/spdk2.sock 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60631 ']' 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:15.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:15.311 13:37:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:15.311 [2024-11-06 13:37:09.037048] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
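[editor's note] The two masks in play here overlap on exactly one core: 0x7 covers cores 0, 1 and 2 for pid 60607, while the second target asks for 0x1c, cores 2, 3 and 4. Core 2 is therefore the one named in the claim error below. The intersection is a one-line bitwise check:

mask1=0x7     # binary 00111 -> cores 0,1,2 (first spdk_tgt)
mask2=0x1c    # binary 11100 -> cores 2,3,4 (second spdk_tgt)
printf 'overlapping cores mask: 0x%x\n' $(( mask1 & mask2 ))   # 0x4 -> core 2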
00:11:15.311 [2024-11-06 13:37:09.037219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60631 ] 00:11:15.311 [2024-11-06 13:37:09.248941] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60607 has claimed it. 00:11:15.311 [2024-11-06 13:37:09.249041] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:15.875 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60631) - No such process 00:11:15.875 ERROR: process (pid: 60631) is no longer running 00:11:15.875 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.875 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:11:15.875 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60607 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60607 ']' 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60607 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60607 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:15.876 killing process with pid 60607 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60607' 00:11:15.876 13:37:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60607 00:11:15.876 13:37:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60607 00:11:19.161 00:11:19.161 real 0m5.621s 00:11:19.161 user 0m15.167s 00:11:19.161 sys 0m0.919s 00:11:19.161 ************************************ 00:11:19.161 END TEST locking_overlapped_coremask 00:11:19.161 ************************************ 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:19.161 13:37:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:19.161 13:37:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:19.161 13:37:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.161 13:37:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:19.161 ************************************ 00:11:19.161 START TEST locking_overlapped_coremask_via_rpc 00:11:19.161 ************************************ 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60706 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60706 /var/tmp/spdk.sock 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60706 ']' 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:19.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:19.161 13:37:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.161 [2024-11-06 13:37:13.019967] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:11:19.161 [2024-11-06 13:37:13.020134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60706 ] 00:11:19.421 [2024-11-06 13:37:13.200257] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
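[editor's note] Before the previous test's teardown, check_remaining_locks (event/cpu_locks.sh@36-38 in the trace above) asserted that the surviving lock files are exactly one zero-padded /var/tmp/spdk_cpu_lock_NNN per claimed core, 000 through 002 for mask 0x7. As a standalone sketch:

check_remaining_locks() {
  local locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists now
  local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2
  [[ ${locks[*]} == "${locks_expected[*]}" ]] || {
    echo "unexpected core lock files: ${locks[*]}" >&2
    return 1
  }
}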
00:11:19.421 [2024-11-06 13:37:13.200362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.421 [2024-11-06 13:37:13.358464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.421 [2024-11-06 13:37:13.358568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.421 [2024-11-06 13:37:13.358604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60729 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60729 /var/tmp/spdk2.sock 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60729 ']' 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:20.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:20.796 13:37:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.796 [2024-11-06 13:37:14.556921] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:11:20.796 [2024-11-06 13:37:14.558374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60729 ] 00:11:21.054 [2024-11-06 13:37:14.782600] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:21.054 [2024-11-06 13:37:14.782705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.313 [2024-11-06 13:37:15.065058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.313 [2024-11-06 13:37:15.065155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.313 [2024-11-06 13:37:15.065107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.930 [2024-11-06 13:37:17.417326] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60706 has claimed it. 
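Since both targets started with --disable-cpumask-locks, neither holds any lock files until the test asks for them: the plain framework_enable_cpumask_locks call makes pid 60706 claim /var/tmp/spdk_cpu_lock_000 through _002 for its cores, and the same RPC sent to the second target then trips over the shared core, producing the "Cannot create lock on core 2" error above. The two calls, sketched with the socket paths from the log:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # pid 60706 claims locks 000-002
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # pid 60729 fails on shared core 2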
00:11:23.930 request: 00:11:23.930 { 00:11:23.930 "method": "framework_enable_cpumask_locks", 00:11:23.930 "req_id": 1 00:11:23.930 } 00:11:23.930 Got JSON-RPC error response 00:11:23.930 response: 00:11:23.930 { 00:11:23.930 "code": -32603, 00:11:23.930 "message": "Failed to claim CPU core: 2" 00:11:23.930 } 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60706 /var/tmp/spdk.sock 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60706 ']' 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60729 /var/tmp/spdk2.sock 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60729 ']' 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:23.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
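That -32603 response is the expected outcome, which is why the harness wraps the call in NOT: the wrapper inverts the exit status so that a failing RPC counts as test success. A condensed, illustrative stand-in for the NOT helper (the real one in autotest_common.sh also inspects signals and error codes; only the inversion is shown here):

NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> test failure
    fi
    return 0       # command failed as expected -> test success
}
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks && echo 'lock conflict confirmed'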
00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.930 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.188 ************************************ 00:11:24.188 END TEST locking_overlapped_coremask_via_rpc 00:11:24.188 ************************************ 00:11:24.188 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.188 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:24.188 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:24.188 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:24.188 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:24.188 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:24.188 00:11:24.188 real 0m5.129s 00:11:24.188 user 0m1.861s 00:11:24.188 sys 0m0.271s 00:11:24.188 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:24.188 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.188 13:37:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:24.188 13:37:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60706 ]] 00:11:24.188 13:37:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60706 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60706 ']' 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60706 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60706 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:24.188 killing process with pid 60706 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60706' 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60706 00:11:24.188 13:37:18 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60706 00:11:27.561 13:37:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60729 ]] 00:11:27.561 13:37:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60729 00:11:27.561 13:37:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60729 ']' 00:11:27.562 13:37:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60729 00:11:27.562 13:37:20 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:11:27.562 13:37:20 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.562 
13:37:20 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60729 00:11:27.562 13:37:20 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:11:27.562 killing process with pid 60729 00:11:27.562 13:37:20 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:11:27.562 13:37:20 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60729' 00:11:27.562 13:37:20 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60729 00:11:27.562 13:37:20 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60729 00:11:30.848 13:37:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:30.848 Process with pid 60706 is not found 00:11:30.848 13:37:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:30.849 13:37:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60706 ]] 00:11:30.849 13:37:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60706 00:11:30.849 13:37:24 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60706 ']' 00:11:30.849 13:37:24 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60706 00:11:30.849 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60706) - No such process 00:11:30.849 13:37:24 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60706 is not found' 00:11:30.849 13:37:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60729 ]] 00:11:30.849 13:37:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60729 00:11:30.849 13:37:24 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60729 ']' 00:11:30.849 13:37:24 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60729 00:11:30.849 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60729) - No such process 00:11:30.849 13:37:24 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60729 is not found' 00:11:30.849 Process with pid 60729 is not found 00:11:30.849 13:37:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:30.849 00:11:30.849 real 1m1.671s 00:11:30.849 user 1m45.356s 00:11:30.849 sys 0m8.628s 00:11:30.849 13:37:24 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.849 13:37:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:30.849 ************************************ 00:11:30.849 END TEST cpu_locks 00:11:30.849 ************************************ 00:11:30.849 ************************************ 00:11:30.849 END TEST event 00:11:30.849 ************************************ 00:11:30.849 00:11:30.849 real 1m35.581s 00:11:30.849 user 2m51.830s 00:11:30.849 sys 0m13.563s 00:11:30.849 13:37:24 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.849 13:37:24 event -- common/autotest_common.sh@10 -- # set +x 00:11:30.849 13:37:24 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:30.849 13:37:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:30.849 13:37:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.849 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:11:30.849 ************************************ 00:11:30.849 START TEST thread 00:11:30.849 ************************************ 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:30.849 * Looking for test storage... 
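The cleanup above follows the harness's killprocess pattern: kill -0 probes that the pid still exists, ps --no-headers -o comm= checks the command name (a reactor, not sudo) before the real kill, and wait reaps the child; on the second cleanup pass the "(60706) - No such process" lines are the benign result of probing a target that already exited. A condensed sketch of that flow (an illustrative stand-in, not the autotest_common.sh implementation):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] && kill "$pid"
    wait "$pid"   # reaping works here because the target is a child of this shell
}
killprocess 60706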
00:11:30.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:30.849 13:37:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.849 13:37:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.849 13:37:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.849 13:37:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.849 13:37:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.849 13:37:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.849 13:37:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.849 13:37:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.849 13:37:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.849 13:37:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.849 13:37:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.849 13:37:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:30.849 13:37:24 thread -- scripts/common.sh@345 -- # : 1 00:11:30.849 13:37:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.849 13:37:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.849 13:37:24 thread -- scripts/common.sh@365 -- # decimal 1 00:11:30.849 13:37:24 thread -- scripts/common.sh@353 -- # local d=1 00:11:30.849 13:37:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.849 13:37:24 thread -- scripts/common.sh@355 -- # echo 1 00:11:30.849 13:37:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.849 13:37:24 thread -- scripts/common.sh@366 -- # decimal 2 00:11:30.849 13:37:24 thread -- scripts/common.sh@353 -- # local d=2 00:11:30.849 13:37:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.849 13:37:24 thread -- scripts/common.sh@355 -- # echo 2 00:11:30.849 13:37:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.849 13:37:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.849 13:37:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.849 13:37:24 thread -- scripts/common.sh@368 -- # return 0 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:30.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.849 --rc genhtml_branch_coverage=1 00:11:30.849 --rc genhtml_function_coverage=1 00:11:30.849 --rc genhtml_legend=1 00:11:30.849 --rc geninfo_all_blocks=1 00:11:30.849 --rc geninfo_unexecuted_blocks=1 00:11:30.849 00:11:30.849 ' 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:30.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.849 --rc genhtml_branch_coverage=1 00:11:30.849 --rc genhtml_function_coverage=1 00:11:30.849 --rc genhtml_legend=1 00:11:30.849 --rc geninfo_all_blocks=1 00:11:30.849 --rc geninfo_unexecuted_blocks=1 00:11:30.849 00:11:30.849 ' 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:30.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:30.849 --rc genhtml_branch_coverage=1 00:11:30.849 --rc genhtml_function_coverage=1 00:11:30.849 --rc genhtml_legend=1 00:11:30.849 --rc geninfo_all_blocks=1 00:11:30.849 --rc geninfo_unexecuted_blocks=1 00:11:30.849 00:11:30.849 ' 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:30.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.849 --rc genhtml_branch_coverage=1 00:11:30.849 --rc genhtml_function_coverage=1 00:11:30.849 --rc genhtml_legend=1 00:11:30.849 --rc geninfo_all_blocks=1 00:11:30.849 --rc geninfo_unexecuted_blocks=1 00:11:30.849 00:11:30.849 ' 00:11:30.849 13:37:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.849 13:37:24 thread -- common/autotest_common.sh@10 -- # set +x 00:11:30.849 ************************************ 00:11:30.849 START TEST thread_poller_perf 00:11:30.849 ************************************ 00:11:30.849 13:37:24 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:30.849 [2024-11-06 13:37:24.546867] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:11:30.849 [2024-11-06 13:37:24.548119] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60941 ] 00:11:30.849 [2024-11-06 13:37:24.752795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.107 Running 1000 pollers for 1 seconds with 1 microseconds period. 
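poller_perf's flags map one-to-one onto that banner: -b 1000 registers a thousand pollers, -l 1 gives each a 1-microsecond period (timed pollers), and -t 1 measures for one second; the busy-cycle and run-count totals printed next are what the final poller_cost line is derived from. Run standalone, the two variants exercised by thread.sh look like:

PERF=/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf
"$PERF" -b 1000 -l 1 -t 1   # timed pollers, 1 us period (this run)
"$PERF" -b 1000 -l 0 -t 1   # busy-loop pollers, period 0 (the run that follows)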
00:11:31.107 [2024-11-06 13:37:24.927287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.481 [2024-11-06T13:37:26.464Z] ====================================== 00:11:32.481 [2024-11-06T13:37:26.464Z] busy:2110505252 (cyc) 00:11:32.481 [2024-11-06T13:37:26.464Z] total_run_count: 311000 00:11:32.481 [2024-11-06T13:37:26.464Z] tsc_hz: 2100000000 (cyc) 00:11:32.481 [2024-11-06T13:37:26.464Z] ====================================== 00:11:32.481 [2024-11-06T13:37:26.464Z] poller_cost: 6786 (cyc), 3231 (nsec) 00:11:32.481 00:11:32.481 real 0m1.729s 00:11:32.481 user 0m1.459s 00:11:32.481 sys 0m0.157s 00:11:32.481 13:37:26 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.481 ************************************ 00:11:32.481 END TEST thread_poller_perf 00:11:32.481 13:37:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:32.481 ************************************ 00:11:32.481 13:37:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:32.481 13:37:26 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:32.481 13:37:26 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:32.481 13:37:26 thread -- common/autotest_common.sh@10 -- # set +x 00:11:32.481 ************************************ 00:11:32.481 START TEST thread_poller_perf 00:11:32.481 ************************************ 00:11:32.481 13:37:26 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:32.481 [2024-11-06 13:37:26.333830] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:11:32.481 [2024-11-06 13:37:26.334651] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60983 ] 00:11:32.739 [2024-11-06 13:37:26.521277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.739 Running 1000 pollers for 1 seconds with 0 microseconds period. 
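The poller_cost line is simple division over the table above it: 2110505252 busy cycles / 311000 runs is about 6786 cycles per poller invocation, and at tsc_hz = 2100000000 (2.1 GHz) that is 6786 / 2.1, roughly 3231 ns. The same arithmetic on the 0-period run that follows gives 2104361172 / 4075000, about 516 cycles or 245 ns, roughly 13x cheaper, since a period of 0 skips the timer bookkeeping a 1-us poller pays for. Checked in shell:

echo $(( 2110505252 / 311000 ))                  # 6786 cycles per run
awk 'BEGIN { printf "%.0f ns\n", 6786 / 2.1 }'   # ~3231 ns at a 2.1 GHz TSC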
00:11:32.739 [2024-11-06 13:37:26.653516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.112 [2024-11-06T13:37:28.095Z] ====================================== 00:11:34.112 [2024-11-06T13:37:28.095Z] busy:2104361172 (cyc) 00:11:34.113 [2024-11-06T13:37:28.096Z] total_run_count: 4075000 00:11:34.113 [2024-11-06T13:37:28.096Z] tsc_hz: 2100000000 (cyc) 00:11:34.113 [2024-11-06T13:37:28.096Z] ====================================== 00:11:34.113 [2024-11-06T13:37:28.096Z] poller_cost: 516 (cyc), 245 (nsec) 00:11:34.113 00:11:34.113 real 0m1.651s 00:11:34.113 user 0m1.422s 00:11:34.113 sys 0m0.116s 00:11:34.113 13:37:27 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.113 ************************************ 00:11:34.113 END TEST thread_poller_perf 00:11:34.113 ************************************ 00:11:34.113 13:37:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:34.113 13:37:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:34.113 ************************************ 00:11:34.113 END TEST thread 00:11:34.113 ************************************ 00:11:34.113 00:11:34.113 real 0m3.688s 00:11:34.113 user 0m3.018s 00:11:34.113 sys 0m0.444s 00:11:34.113 13:37:27 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.113 13:37:27 thread -- common/autotest_common.sh@10 -- # set +x 00:11:34.113 13:37:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:34.113 13:37:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:34.113 13:37:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:34.113 13:37:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.113 13:37:28 -- common/autotest_common.sh@10 -- # set +x 00:11:34.113 ************************************ 00:11:34.113 START TEST app_cmdline 00:11:34.113 ************************************ 00:11:34.113 13:37:28 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:34.370 * Looking for test storage... 
00:11:34.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:34.370 13:37:28 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:34.370 13:37:28 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:11:34.370 13:37:28 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:34.370 13:37:28 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:34.370 13:37:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.370 13:37:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.370 13:37:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.370 13:37:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.371 13:37:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:34.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.371 --rc genhtml_branch_coverage=1 00:11:34.371 --rc genhtml_function_coverage=1 00:11:34.371 --rc genhtml_legend=1 00:11:34.371 --rc geninfo_all_blocks=1 00:11:34.371 --rc geninfo_unexecuted_blocks=1 00:11:34.371 00:11:34.371 ' 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:34.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.371 --rc genhtml_branch_coverage=1 00:11:34.371 --rc genhtml_function_coverage=1 00:11:34.371 --rc genhtml_legend=1 00:11:34.371 --rc geninfo_all_blocks=1 00:11:34.371 --rc geninfo_unexecuted_blocks=1 00:11:34.371 
00:11:34.371 ' 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:34.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.371 --rc genhtml_branch_coverage=1 00:11:34.371 --rc genhtml_function_coverage=1 00:11:34.371 --rc genhtml_legend=1 00:11:34.371 --rc geninfo_all_blocks=1 00:11:34.371 --rc geninfo_unexecuted_blocks=1 00:11:34.371 00:11:34.371 ' 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:34.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.371 --rc genhtml_branch_coverage=1 00:11:34.371 --rc genhtml_function_coverage=1 00:11:34.371 --rc genhtml_legend=1 00:11:34.371 --rc geninfo_all_blocks=1 00:11:34.371 --rc geninfo_unexecuted_blocks=1 00:11:34.371 00:11:34.371 ' 00:11:34.371 13:37:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:34.371 13:37:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61071 00:11:34.371 13:37:28 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:34.371 13:37:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61071 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 61071 ']' 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:34.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:34.371 13:37:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:34.371 [2024-11-06 13:37:28.334539] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:11:34.371 [2024-11-06 13:37:28.334694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:11:34.630 [2024-11-06 13:37:28.514333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.888 [2024-11-06 13:37:28.662018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.828 13:37:29 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.828 13:37:29 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:11:35.828 13:37:29 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:36.093 { 00:11:36.093 "version": "SPDK v25.01-pre git sha1 40c30569f", 00:11:36.093 "fields": { 00:11:36.093 "major": 25, 00:11:36.093 "minor": 1, 00:11:36.093 "patch": 0, 00:11:36.093 "suffix": "-pre", 00:11:36.093 "commit": "40c30569f" 00:11:36.093 } 00:11:36.093 } 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:36.093 13:37:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:36.093 13:37:29 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:36.352 request: 00:11:36.352 { 00:11:36.352 "method": "env_dpdk_get_mem_stats", 00:11:36.352 "req_id": 1 00:11:36.352 } 00:11:36.352 Got JSON-RPC error response 00:11:36.352 response: 00:11:36.352 { 00:11:36.352 "code": -32601, 00:11:36.352 "message": "Method not found" 00:11:36.352 } 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.352 13:37:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61071 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 61071 ']' 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 61071 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61071 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:36.352 killing process with pid 61071 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61071' 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@971 -- # kill 61071 00:11:36.352 13:37:30 app_cmdline -- common/autotest_common.sh@976 -- # wait 61071 00:11:39.640 ************************************ 00:11:39.640 END TEST app_cmdline 00:11:39.640 ************************************ 00:11:39.640 00:11:39.640 real 0m5.006s 00:11:39.640 user 0m5.268s 00:11:39.640 sys 0m0.675s 00:11:39.640 13:37:33 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:39.640 13:37:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:39.640 13:37:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:39.640 13:37:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:39.640 13:37:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.640 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:11:39.640 ************************************ 00:11:39.640 START TEST version 00:11:39.640 ************************************ 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:39.640 * Looking for test storage... 
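The -32601 above is the --rpcs-allowed allowlist doing its job: the target was started permitting only spdk_get_version and rpc_get_methods, so any other method, even one that exists in a normal build such as env_dpdk_get_mem_stats, resolves to "Method not found". Sketched against the binaries from the log:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods          # returns exactly the two allowed methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # rejected with JSON-RPC error -32601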
00:11:39.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1691 -- # lcov --version 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:39.640 13:37:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.640 13:37:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.640 13:37:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.640 13:37:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.640 13:37:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.640 13:37:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.640 13:37:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.640 13:37:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.640 13:37:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.640 13:37:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.640 13:37:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.640 13:37:33 version -- scripts/common.sh@344 -- # case "$op" in 00:11:39.640 13:37:33 version -- scripts/common.sh@345 -- # : 1 00:11:39.640 13:37:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.640 13:37:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.640 13:37:33 version -- scripts/common.sh@365 -- # decimal 1 00:11:39.640 13:37:33 version -- scripts/common.sh@353 -- # local d=1 00:11:39.640 13:37:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.640 13:37:33 version -- scripts/common.sh@355 -- # echo 1 00:11:39.640 13:37:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.640 13:37:33 version -- scripts/common.sh@366 -- # decimal 2 00:11:39.640 13:37:33 version -- scripts/common.sh@353 -- # local d=2 00:11:39.640 13:37:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.640 13:37:33 version -- scripts/common.sh@355 -- # echo 2 00:11:39.640 13:37:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.640 13:37:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.640 13:37:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.640 13:37:33 version -- scripts/common.sh@368 -- # return 0 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:39.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.640 --rc genhtml_branch_coverage=1 00:11:39.640 --rc genhtml_function_coverage=1 00:11:39.640 --rc genhtml_legend=1 00:11:39.640 --rc geninfo_all_blocks=1 00:11:39.640 --rc geninfo_unexecuted_blocks=1 00:11:39.640 00:11:39.640 ' 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:39.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.640 --rc genhtml_branch_coverage=1 00:11:39.640 --rc genhtml_function_coverage=1 00:11:39.640 --rc genhtml_legend=1 00:11:39.640 --rc geninfo_all_blocks=1 00:11:39.640 --rc geninfo_unexecuted_blocks=1 00:11:39.640 00:11:39.640 ' 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:39.640 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:39.640 --rc genhtml_branch_coverage=1 00:11:39.640 --rc genhtml_function_coverage=1 00:11:39.640 --rc genhtml_legend=1 00:11:39.640 --rc geninfo_all_blocks=1 00:11:39.640 --rc geninfo_unexecuted_blocks=1 00:11:39.640 00:11:39.640 ' 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:39.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.640 --rc genhtml_branch_coverage=1 00:11:39.640 --rc genhtml_function_coverage=1 00:11:39.640 --rc genhtml_legend=1 00:11:39.640 --rc geninfo_all_blocks=1 00:11:39.640 --rc geninfo_unexecuted_blocks=1 00:11:39.640 00:11:39.640 ' 00:11:39.640 13:37:33 version -- app/version.sh@17 -- # get_header_version major 00:11:39.640 13:37:33 version -- app/version.sh@14 -- # cut -f2 00:11:39.640 13:37:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:39.640 13:37:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:39.640 13:37:33 version -- app/version.sh@17 -- # major=25 00:11:39.640 13:37:33 version -- app/version.sh@18 -- # get_header_version minor 00:11:39.640 13:37:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:39.640 13:37:33 version -- app/version.sh@14 -- # cut -f2 00:11:39.640 13:37:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:39.640 13:37:33 version -- app/version.sh@18 -- # minor=1 00:11:39.640 13:37:33 version -- app/version.sh@19 -- # get_header_version patch 00:11:39.640 13:37:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:39.640 13:37:33 version -- app/version.sh@14 -- # cut -f2 00:11:39.640 13:37:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:39.640 13:37:33 version -- app/version.sh@19 -- # patch=0 00:11:39.640 13:37:33 version -- app/version.sh@20 -- # get_header_version suffix 00:11:39.640 13:37:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:39.640 13:37:33 version -- app/version.sh@14 -- # cut -f2 00:11:39.640 13:37:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:39.640 13:37:33 version -- app/version.sh@20 -- # suffix=-pre 00:11:39.640 13:37:33 version -- app/version.sh@22 -- # version=25.1 00:11:39.640 13:37:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:39.640 13:37:33 version -- app/version.sh@28 -- # version=25.1rc0 00:11:39.640 13:37:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:39.640 13:37:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:39.640 13:37:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:39.640 13:37:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:39.640 00:11:39.640 real 0m0.289s 00:11:39.640 user 0m0.175s 00:11:39.640 sys 0m0.165s 00:11:39.640 13:37:33 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:39.640 13:37:33 version -- common/autotest_common.sh@10 -- # set +x 00:11:39.640 ************************************ 00:11:39.640 END TEST version 00:11:39.640 ************************************ 00:11:39.640 13:37:33 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:39.640 13:37:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:39.640 13:37:33 -- spdk/autotest.sh@194 -- # uname -s 00:11:39.640 13:37:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:39.640 13:37:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:39.640 13:37:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:39.640 13:37:33 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:11:39.640 13:37:33 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:39.640 13:37:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:39.640 13:37:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.640 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:11:39.640 ************************************ 00:11:39.640 START TEST blockdev_nvme 00:11:39.640 ************************************ 00:11:39.640 13:37:33 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:39.640 * Looking for test storage... 00:11:39.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:39.640 13:37:33 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:39.640 13:37:33 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:11:39.640 13:37:33 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:39.640 13:37:33 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.640 13:37:33 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.641 13:37:33 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:11:39.641 13:37:33 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.641 13:37:33 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:39.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.641 --rc genhtml_branch_coverage=1 00:11:39.641 --rc genhtml_function_coverage=1 00:11:39.641 --rc genhtml_legend=1 00:11:39.641 --rc geninfo_all_blocks=1 00:11:39.641 --rc geninfo_unexecuted_blocks=1 00:11:39.641 00:11:39.641 ' 00:11:39.641 13:37:33 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:39.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.641 --rc genhtml_branch_coverage=1 00:11:39.641 --rc genhtml_function_coverage=1 00:11:39.641 --rc genhtml_legend=1 00:11:39.641 --rc geninfo_all_blocks=1 00:11:39.641 --rc geninfo_unexecuted_blocks=1 00:11:39.641 00:11:39.641 ' 00:11:39.641 13:37:33 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:39.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.641 --rc genhtml_branch_coverage=1 00:11:39.641 --rc genhtml_function_coverage=1 00:11:39.641 --rc genhtml_legend=1 00:11:39.641 --rc geninfo_all_blocks=1 00:11:39.641 --rc geninfo_unexecuted_blocks=1 00:11:39.641 00:11:39.641 ' 00:11:39.641 13:37:33 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:39.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.641 --rc genhtml_branch_coverage=1 00:11:39.641 --rc genhtml_function_coverage=1 00:11:39.641 --rc genhtml_legend=1 00:11:39.641 --rc geninfo_all_blocks=1 00:11:39.641 --rc geninfo_unexecuted_blocks=1 00:11:39.641 00:11:39.641 ' 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:39.641 13:37:33 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:11:39.641 13:37:33 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61266 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:39.900 13:37:33 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61266 00:11:39.900 13:37:33 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 61266 ']' 00:11:39.900 13:37:33 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.900 13:37:33 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:39.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.900 13:37:33 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.900 13:37:33 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:39.900 13:37:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.900 [2024-11-06 13:37:33.725388] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:11:39.900 [2024-11-06 13:37:33.725510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61266 ] 00:11:40.159 [2024-11-06 13:37:33.909320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.159 [2024-11-06 13:37:34.133771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.535 13:37:35 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.535 13:37:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.795 13:37:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.795 13:37:35 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:41.795 13:37:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.795 13:37:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.795 13:37:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.795 13:37:35 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:11:41.795 13:37:35 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:11:41.795 13:37:35 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:11:41.795 13:37:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.795 13:37:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.795 13:37:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.795 13:37:35 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:11:41.795 13:37:35 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:11:41.796 13:37:35 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "30e0732a-9e41-40f9-aaa1-9f1dba37657a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "30e0732a-9e41-40f9-aaa1-9f1dba37657a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d3bf2fe5-dfea-466d-a124-291387ff425d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d3bf2fe5-dfea-466d-a124-291387ff425d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "41545dc0-9117-48d1-863a-90357238ed86"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "41545dc0-9117-48d1-863a-90357238ed86",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "60d942fd-207f-4db8-a88f-5260e669f8cb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "60d942fd-207f-4db8-a88f-5260e669f8cb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "79353278-b44c-4b6c-a32f-3e3c8b115203"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "79353278-b44c-4b6c-a32f-3e3c8b115203",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8e5eb5fa-1e89-4e3b-b45e-041991915c73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8e5eb5fa-1e89-4e3b-b45e-041991915c73",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:41.796 13:37:35 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:11:41.796 13:37:35 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:11:41.796 13:37:35 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:11:41.796 13:37:35 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61266 00:11:41.796 13:37:35 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 61266 ']' 00:11:41.796 13:37:35 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 61266 00:11:41.796 13:37:35 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:11:41.796 13:37:35 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:41.796 13:37:35 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61266 00:11:42.054 killing process with pid 61266 00:11:42.054 13:37:35 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.054 13:37:35 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.054 13:37:35 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61266' 00:11:42.054 13:37:35 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 61266 00:11:42.054 13:37:35 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 61266 00:11:44.597 13:37:38 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:44.597 13:37:38 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:44.597 13:37:38 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:44.597 13:37:38 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:44.597 13:37:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:44.597 ************************************ 00:11:44.597 START TEST bdev_hello_world 00:11:44.597 ************************************ 00:11:44.597 13:37:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:44.855 [2024-11-06 13:37:38.617564] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:11:44.855 [2024-11-06 13:37:38.617984] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:11:44.855 [2024-11-06 13:37:38.820832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.113 [2024-11-06 13:37:38.984234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.048 [2024-11-06 13:37:39.696252] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:46.048 [2024-11-06 13:37:39.696311] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:46.048 [2024-11-06 13:37:39.696341] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:46.048 [2024-11-06 13:37:39.699606] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:46.048 [2024-11-06 13:37:39.700142] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:46.048 [2024-11-06 13:37:39.700169] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:46.048 [2024-11-06 13:37:39.700407] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
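For reference, the hello_world pass above reduces to a single binary run against the generated NVMe config. A minimal sketch of the same invocation, assuming an SPDK build tree at the repo root and sufficient privileges (paths as in the trace; -b must name a bdev defined in the JSON config):

    # run the hello_bdev example by hand against the same config the harness uses
    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1

Per the NOTICEs above, it opens the bdev and an io channel, writes the "Hello World!" string, reads it back, prints it, and stops the app.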
00:11:46.048 00:11:46.048 [2024-11-06 13:37:39.700435] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:47.423 00:11:47.423 real 0m2.517s 00:11:47.423 user 0m2.133s 00:11:47.423 sys 0m0.272s 00:11:47.423 ************************************ 00:11:47.423 END TEST bdev_hello_world 00:11:47.423 ************************************ 00:11:47.423 13:37:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:47.423 13:37:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:47.423 13:37:41 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:11:47.423 13:37:41 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:47.423 13:37:41 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:47.423 13:37:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:47.423 ************************************ 00:11:47.423 START TEST bdev_bounds 00:11:47.423 ************************************ 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:11:47.423 Process bdevio pid: 61415 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61415 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61415' 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61415 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61415 ']' 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:47.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:47.423 13:37:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:47.423 [2024-11-06 13:37:41.157268] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:11:47.423 [2024-11-06 13:37:41.157619] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61415 ] 00:11:47.423 [2024-11-06 13:37:41.348222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:47.682 [2024-11-06 13:37:41.527111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.682 [2024-11-06 13:37:41.527202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.682 [2024-11-06 13:37:41.527205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.617 13:37:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:48.617 13:37:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:11:48.617 13:37:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:48.617 I/O targets: 00:11:48.617 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:48.617 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:48.617 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:48.617 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:48.617 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:48.617 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:48.617 00:11:48.617 00:11:48.617 CUnit - A unit testing framework for C - Version 2.1-3 00:11:48.617 http://cunit.sourceforge.net/ 00:11:48.617 00:11:48.617 00:11:48.617 Suite: bdevio tests on: Nvme3n1 00:11:48.617 Test: blockdev write read block ...passed 00:11:48.617 Test: blockdev write zeroes read block ...passed 00:11:48.617 Test: blockdev write zeroes read no split ...passed 00:11:48.617 Test: blockdev write zeroes read split ...passed 00:11:48.617 Test: blockdev write zeroes read split partial ...passed 00:11:48.617 Test: blockdev reset ...[2024-11-06 13:37:42.509604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:48.617 [2024-11-06 13:37:42.513755] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
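The bdevio pass is driven in two pieces, both visible verbatim in the trace: the bdevio app is started in wait mode against the same JSON config, and tests.py then kicks off the suites over RPC. A minimal manual sketch, assuming an SPDK build tree at the repo root and the default RPC socket:

    # start bdevio waiting for an RPC trigger (-w), then run the suites
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests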
00:11:48.617 passed 00:11:48.617 Test: blockdev write read 8 blocks ...passed 00:11:48.617 Test: blockdev write read size > 128k ...passed 00:11:48.617 Test: blockdev write read invalid size ...passed 00:11:48.618 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.618 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.618 Test: blockdev write read max offset ...passed 00:11:48.618 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.618 Test: blockdev writev readv 8 blocks ...passed 00:11:48.618 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.618 Test: blockdev writev readv block ...passed 00:11:48.618 Test: blockdev writev readv size > 128k ...passed 00:11:48.618 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.618 Test: blockdev comparev and writev ...[2024-11-06 13:37:42.523516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba60a000 len:0x1000 00:11:48.618 [2024-11-06 13:37:42.523583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:48.618 passed 00:11:48.618 Test: blockdev nvme passthru rw ...passed 00:11:48.618 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.618 Test: blockdev nvme admin passthru ...[2024-11-06 13:37:42.524235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:48.618 [2024-11-06 13:37:42.524285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:48.618 passed 00:11:48.618 Test: blockdev copy ...passed 00:11:48.618 Suite: bdevio tests on: Nvme2n3 00:11:48.618 Test: blockdev write read block ...passed 00:11:48.618 Test: blockdev write zeroes read block ...passed 00:11:48.618 Test: blockdev write zeroes read no split ...passed 00:11:48.618 Test: blockdev write zeroes read split ...passed 00:11:48.877 Test: blockdev write zeroes read split partial ...passed 00:11:48.877 Test: blockdev reset ...[2024-11-06 13:37:42.606632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:48.877 passed 00:11:48.877 Test: blockdev write read 8 blocks ...[2024-11-06 13:37:42.611407] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
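Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of one controller (serial 12342 at 0000:00:12.0 in the earlier bdev dump), so this suite and the two that follow each reset the controller at the same PCI address. A quick way to see the bdev-to-controller mapping, sketched with the same rpc.py and jq the harness already uses (assumes the default RPC socket):

    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name)\t\(.driver_specific.nvme[0].pci_address)"'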
00:11:48.877 passed 00:11:48.877 Test: blockdev write read size > 128k ...passed 00:11:48.877 Test: blockdev write read invalid size ...passed 00:11:48.877 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.877 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.877 Test: blockdev write read max offset ...passed 00:11:48.877 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.877 Test: blockdev writev readv 8 blocks ...passed 00:11:48.877 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.877 Test: blockdev writev readv block ...passed 00:11:48.877 Test: blockdev writev readv size > 128k ...passed 00:11:48.877 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.877 Test: blockdev comparev and writev ...[2024-11-06 13:37:42.619120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29d006000 len:0x1000 00:11:48.877 [2024-11-06 13:37:42.619192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:48.877 passed 00:11:48.877 Test: blockdev nvme passthru rw ...passed 00:11:48.877 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.877 Test: blockdev nvme admin passthru ...[2024-11-06 13:37:42.619828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:48.877 [2024-11-06 13:37:42.619865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:48.877 passed 00:11:48.877 Test: blockdev copy ...passed 00:11:48.877 Suite: bdevio tests on: Nvme2n2 00:11:48.877 Test: blockdev write read block ...passed 00:11:48.877 Test: blockdev write zeroes read block ...passed 00:11:48.877 Test: blockdev write zeroes read no split ...passed 00:11:48.877 Test: blockdev write zeroes read split ...passed 00:11:48.877 Test: blockdev write zeroes read split partial ...passed 00:11:48.877 Test: blockdev reset ...[2024-11-06 13:37:42.734330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:48.877 [2024-11-06 13:37:42.739579] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
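The COMPARE FAILURE completions printed by each "comparev and writev" case are expected output, not errors: every suite still reports passed, and the Run Summary at the end of this pass counts zero failures. The (02/85) pair is the raw NVMe status, SCT 2h (Media and Data Integrity Errors) / SC 85h (Compare Failure), which the test appears to provoke deliberately to exercise the compare path.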
00:11:48.877 passed 00:11:48.877 Test: blockdev write read 8 blocks ...passed 00:11:48.877 Test: blockdev write read size > 128k ...passed 00:11:48.877 Test: blockdev write read invalid size ...passed 00:11:48.877 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.877 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.877 Test: blockdev write read max offset ...passed 00:11:48.877 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.877 Test: blockdev writev readv 8 blocks ...passed 00:11:48.877 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.877 Test: blockdev writev readv block ...passed 00:11:48.877 Test: blockdev writev readv size > 128k ...passed 00:11:48.877 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.877 Test: blockdev comparev and writev ...[2024-11-06 13:37:42.750327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:11:48.877 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ca63c000 len:0x1000 00:11:48.877 [2024-11-06 13:37:42.750548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:48.877 passed 00:11:48.877 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.877 Test: blockdev nvme admin passthru ...[2024-11-06 13:37:42.751302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:48.877 [2024-11-06 13:37:42.751347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:48.877 passed 00:11:48.877 Test: blockdev copy ...passed 00:11:48.877 Suite: bdevio tests on: Nvme2n1 00:11:48.877 Test: blockdev write read block ...passed 00:11:48.877 Test: blockdev write zeroes read block ...passed 00:11:48.877 Test: blockdev write zeroes read no split ...passed 00:11:48.877 Test: blockdev write zeroes read split ...passed 00:11:49.136 Test: blockdev write zeroes read split partial ...passed 00:11:49.136 Test: blockdev reset ...[2024-11-06 13:37:42.865957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:49.136 passed 00:11:49.136 Test: blockdev write read 8 blocks ...[2024-11-06 13:37:42.870912] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:49.136 passed 00:11:49.136 Test: blockdev write read size > 128k ...passed 00:11:49.136 Test: blockdev write read invalid size ...passed 00:11:49.136 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.136 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.136 Test: blockdev write read max offset ...passed 00:11:49.136 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.136 Test: blockdev writev readv 8 blocks ...passed 00:11:49.136 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.136 Test: blockdev writev readv block ...passed 00:11:49.136 Test: blockdev writev readv size > 128k ...passed 00:11:49.136 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.136 Test: blockdev comparev and writev ...[2024-11-06 13:37:42.880425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:11:49.136 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ca638000 len:0x1000 00:11:49.136 [2024-11-06 13:37:42.880619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:49.136 passed 00:11:49.136 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.136 Test: blockdev nvme admin passthru ...[2024-11-06 13:37:42.881458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:49.136 [2024-11-06 13:37:42.881501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:49.136 passed 00:11:49.136 Test: blockdev copy ...passed 00:11:49.136 Suite: bdevio tests on: Nvme1n1 00:11:49.136 Test: blockdev write read block ...passed 00:11:49.136 Test: blockdev write zeroes read block ...passed 00:11:49.136 Test: blockdev write zeroes read no split ...passed 00:11:49.136 Test: blockdev write zeroes read split ...passed 00:11:49.136 Test: blockdev write zeroes read split partial ...passed 00:11:49.136 Test: blockdev reset ...[2024-11-06 13:37:42.989347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:49.136 [2024-11-06 13:37:42.994101] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:49.136 passed 00:11:49.136 Test: blockdev write read 8 blocks ...passed 00:11:49.136 Test: blockdev write read size > 128k ...passed 00:11:49.136 Test: blockdev write read invalid size ...passed 00:11:49.136 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.136 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.136 Test: blockdev write read max offset ...passed 00:11:49.136 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.136 Test: blockdev writev readv 8 blocks ...passed 00:11:49.136 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.136 Test: blockdev writev readv block ...passed 00:11:49.136 Test: blockdev writev readv size > 128k ...passed 00:11:49.136 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.136 Test: blockdev comparev and writev ...[2024-11-06 13:37:43.005621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca634000 len:0x1000 00:11:49.136 [2024-11-06 13:37:43.005831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:11:49.136 Test: blockdev nvme passthru rw ...passed 00:11:49.136 Test: blockdev nvme passthru vendor specific ...0 sqhd:0018 p:1 m:0 dnr:1 00:11:49.136 [2024-11-06 13:37:43.006575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:49.136 [2024-11-06 13:37:43.006617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:49.136 passed 00:11:49.136 Test: blockdev nvme admin passthru ...passed 00:11:49.137 Test: blockdev copy ...passed 00:11:49.137 Suite: bdevio tests on: Nvme0n1 00:11:49.137 Test: blockdev write read block ...passed 00:11:49.137 Test: blockdev write zeroes read block ...passed 00:11:49.137 Test: blockdev write zeroes read no split ...passed 00:11:49.137 Test: blockdev write zeroes read split ...passed 00:11:49.137 Test: blockdev write zeroes read split partial ...passed 00:11:49.137 Test: blockdev reset ...[2024-11-06 13:37:43.111496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:49.137 passed 00:11:49.137 Test: blockdev write read 8 blocks ...[2024-11-06 13:37:43.116101] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
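Nvme0n1, reset above at 0000:00:10.0, is the only bdev in this config formatted with separate (non-interleaved) metadata — md_size 64, md_interleave false in the earlier bdev_get_bdevs dump — which is why its comparev_and_writev case is skipped just below. One jq filter that would spot metadata-bearing bdevs in the same dump (field names as emitted above; assumes the default RPC socket):

    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select((.md_size // 0) > 0) | "\(.name) md_size=\(.md_size) interleave=\(.md_interleave)"'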
00:11:49.137 passed 00:11:49.395 Test: blockdev write read size > 128k ...passed 00:11:49.395 Test: blockdev write read invalid size ...passed 00:11:49.395 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.395 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.395 Test: blockdev write read max offset ...passed 00:11:49.395 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.395 Test: blockdev writev readv 8 blocks ...passed 00:11:49.395 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.395 Test: blockdev writev readv block ...passed 00:11:49.395 Test: blockdev writev readv size > 128k ...passed 00:11:49.395 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.395 Test: blockdev comparev and writev ...passed 00:11:49.395 Test: blockdev nvme passthru rw ...[2024-11-06 13:37:43.125414] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:49.395 separate metadata which is not supported yet. 00:11:49.395 passed 00:11:49.395 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.395 Test: blockdev nvme admin passthru ...[2024-11-06 13:37:43.126303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:49.395 [2024-11-06 13:37:43.126446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:49.395 passed 00:11:49.395 Test: blockdev copy ...passed 00:11:49.395 00:11:49.395 Run Summary: Type Total Ran Passed Failed Inactive 00:11:49.395 suites 6 6 n/a 0 0 00:11:49.395 tests 138 138 138 0 0 00:11:49.395 asserts 893 893 893 0 n/a 00:11:49.395 00:11:49.395 Elapsed time = 1.944 seconds 00:11:49.395 0 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61415 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61415 ']' 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61415 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61415 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61415' 00:11:49.395 killing process with pid 61415 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61415 00:11:49.395 13:37:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61415 00:11:50.767 13:37:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:50.767 00:11:50.767 real 0m3.345s 00:11:50.767 user 0m8.742s 00:11:50.767 sys 0m0.448s 00:11:50.767 13:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:50.767 ************************************ 00:11:50.767 END TEST bdev_bounds 00:11:50.767 ************************************ 00:11:50.767 13:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:11:50.767 13:37:44 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:50.767 13:37:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:50.767 13:37:44 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:50.767 13:37:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 ************************************ 00:11:50.767 START TEST bdev_nbd 00:11:50.767 ************************************ 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61486 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61486 /var/tmp/spdk-nbd.sock 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61486 ']' 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:50.767 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-nbd.sock... 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:50.767 13:37:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:50.768 [2024-11-06 13:37:44.570364] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:11:50.768 [2024-11-06 13:37:44.570524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.768 [2024-11-06 13:37:44.749058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.024 [2024-11-06 13:37:44.883475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w 
nbd0 /proc/partitions 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.957 1+0 records in 00:11:51.957 1+0 records out 00:11:51.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652178 s, 6.3 MB/s 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:51.957 13:37:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.523 1+0 records in 00:11:52.523 1+0 records out 00:11:52.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589214 s, 7.0 MB/s 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:52.523 
13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:52.523 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.782 1+0 records in 00:11:52.782 1+0 records out 00:11:52.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783239 s, 5.2 MB/s 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:52.782 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.040 1+0 records in 00:11:53.040 1+0 records out 00:11:53.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613329 s, 6.7 MB/s 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:53.040 13:37:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.309 1+0 records in 00:11:53.309 1+0 records out 00:11:53.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756614 s, 5.4 MB/s 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:53.309 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:53.875 
13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:53.875 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.876 1+0 records in 00:11:53.876 1+0 records out 00:11:53.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653443 s, 6.3 MB/s 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd0", 00:11:53.876 "bdev_name": "Nvme0n1" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd1", 00:11:53.876 "bdev_name": "Nvme1n1" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd2", 00:11:53.876 "bdev_name": "Nvme2n1" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd3", 00:11:53.876 "bdev_name": "Nvme2n2" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd4", 00:11:53.876 "bdev_name": "Nvme2n3" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd5", 00:11:53.876 "bdev_name": "Nvme3n1" 00:11:53.876 } 00:11:53.876 ]' 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd0", 00:11:53.876 "bdev_name": "Nvme0n1" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd1", 00:11:53.876 "bdev_name": "Nvme1n1" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd2", 00:11:53.876 "bdev_name": "Nvme2n1" 
00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd3", 00:11:53.876 "bdev_name": "Nvme2n2" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd4", 00:11:53.876 "bdev_name": "Nvme2n3" 00:11:53.876 }, 00:11:53.876 { 00:11:53.876 "nbd_device": "/dev/nbd5", 00:11:53.876 "bdev_name": "Nvme3n1" 00:11:53.876 } 00:11:53.876 ]' 00:11:53.876 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:54.134 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:54.134 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.134 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:54.134 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.134 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:54.134 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.134 13:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.393 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.651 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 
00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.910 13:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.169 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.428 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:55.687 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:56.254 13:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 
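The disk-count check in this stretch distills to a small helper: nbd_get_disks returns a JSON array over the RPC socket, jq pulls out each nbd_device field, and grep -c counts the /dev/nbd entries, which must be zero once every device has been stopped. A condensed sketch, assuming the same rpc.py path and socket as the trace; the helper name mirrors nbd_common.sh:

    nbd_get_count() {
        local rpc_server=$1
        local disks_json disks_name
        disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 when nothing matches, so guard it to survive set -e
        echo "$disks_name" | grep -c /dev/nbd || true
    }

With all six disks stopped this prints 0, and the test then restarts the full set (Nvme0n1 through Nvme3n1) for the data-verification pass traced below.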
00:11:56.512 /dev/nbd0 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.512 1+0 records in 00:11:56.512 1+0 records out 00:11:56.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783272 s, 5.2 MB/s 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:56.512 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:11:56.771 /dev/nbd1 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.771 1+0 records in 00:11:56.771 1+0 records out 00:11:56.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564036 s, 7.3 MB/s 00:11:56.771 13:37:50 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:56.771 13:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:11:57.029 /dev/nbd10 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.288 1+0 records in 00:11:57.288 1+0 records out 00:11:57.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597794 s, 6.9 MB/s 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:57.288 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:11:57.547 /dev/nbd11 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:57.547 13:37:51 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.547 1+0 records in 00:11:57.547 1+0 records out 00:11:57.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615999 s, 6.6 MB/s 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:57.547 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:11:57.807 /dev/nbd12 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.807 1+0 records in 00:11:57.807 1+0 records out 00:11:57.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071012 s, 5.8 MB/s 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
'[' 4096 '!=' 0 ']' 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:57.807 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:11:58.066 /dev/nbd13 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.066 1+0 records in 00:11:58.066 1+0 records out 00:11:58.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809871 s, 5.1 MB/s 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.066 13:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:58.325 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:58.325 { 00:11:58.325 "nbd_device": "/dev/nbd0", 00:11:58.325 "bdev_name": "Nvme0n1" 00:11:58.325 }, 00:11:58.325 { 00:11:58.325 "nbd_device": "/dev/nbd1", 00:11:58.325 "bdev_name": "Nvme1n1" 00:11:58.325 }, 00:11:58.325 { 00:11:58.325 "nbd_device": "/dev/nbd10", 00:11:58.325 "bdev_name": "Nvme2n1" 00:11:58.325 }, 00:11:58.325 { 00:11:58.325 "nbd_device": "/dev/nbd11", 00:11:58.325 "bdev_name": "Nvme2n2" 00:11:58.325 }, 00:11:58.325 { 00:11:58.326 "nbd_device": "/dev/nbd12", 00:11:58.326 "bdev_name": "Nvme2n3" 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 
"nbd_device": "/dev/nbd13", 00:11:58.326 "bdev_name": "Nvme3n1" 00:11:58.326 } 00:11:58.326 ]' 00:11:58.326 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:58.326 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:58.326 { 00:11:58.326 "nbd_device": "/dev/nbd0", 00:11:58.326 "bdev_name": "Nvme0n1" 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 "nbd_device": "/dev/nbd1", 00:11:58.326 "bdev_name": "Nvme1n1" 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 "nbd_device": "/dev/nbd10", 00:11:58.326 "bdev_name": "Nvme2n1" 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 "nbd_device": "/dev/nbd11", 00:11:58.326 "bdev_name": "Nvme2n2" 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 "nbd_device": "/dev/nbd12", 00:11:58.326 "bdev_name": "Nvme2n3" 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 "nbd_device": "/dev/nbd13", 00:11:58.326 "bdev_name": "Nvme3n1" 00:11:58.326 } 00:11:58.326 ]' 00:11:58.584 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:58.584 /dev/nbd1 00:11:58.584 /dev/nbd10 00:11:58.584 /dev/nbd11 00:11:58.584 /dev/nbd12 00:11:58.584 /dev/nbd13' 00:11:58.584 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:58.584 /dev/nbd1 00:11:58.584 /dev/nbd10 00:11:58.584 /dev/nbd11 00:11:58.584 /dev/nbd12 00:11:58.584 /dev/nbd13' 00:11:58.584 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:58.585 256+0 records in 00:11:58.585 256+0 records out 00:11:58.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00799675 s, 131 MB/s 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:58.585 256+0 records in 00:11:58.585 256+0 records out 00:11:58.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129836 s, 8.1 MB/s 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.585 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:11:58.843 256+0 records in 00:11:58.843 256+0 records out 00:11:58.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140917 s, 7.4 MB/s 00:11:58.843 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.843 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:58.843 256+0 records in 00:11:58.843 256+0 records out 00:11:58.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135424 s, 7.7 MB/s 00:11:58.843 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.843 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:59.101 256+0 records in 00:11:59.101 256+0 records out 00:11:59.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131033 s, 8.0 MB/s 00:11:59.101 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.101 13:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:59.101 256+0 records in 00:11:59.101 256+0 records out 00:11:59.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132396 s, 7.9 MB/s 00:11:59.102 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.102 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:59.360 256+0 records in 00:11:59.360 256+0 records out 00:11:59.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134826 s, 7.8 MB/s 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.360 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.619 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.878 13:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.137 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.704 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.963 13:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:01.530 malloc_lvol_verify 00:12:01.530 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:01.788 752d6994-e9df-4ef2-9a3a-484cbe7bfbc2 00:12:01.788 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:02.046 dcb2d008-d002-4233-a802-7bb5f096cc79 00:12:02.046 13:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:02.305 /dev/nbd0 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:02.305 mke2fs 1.47.0 (5-Feb-2023) 00:12:02.305 
Discarding device blocks: 0/4096 done 00:12:02.305 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:02.305 00:12:02.305 Allocating group tables: 0/1 done 00:12:02.305 Writing inode tables: 0/1 done 00:12:02.305 Creating journal (1024 blocks): done 00:12:02.305 Writing superblocks and filesystem accounting information: 0/1 done 00:12:02.305 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.305 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61486 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61486 ']' 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61486 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61486 00:12:02.873 killing process with pid 61486 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61486' 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61486 00:12:02.873 13:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61486 00:12:04.274 13:37:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:04.274 00:12:04.274 real 0m13.500s 00:12:04.274 user 0m18.150s 00:12:04.274 sys 0m5.329s 00:12:04.274 ************************************ 00:12:04.274 END TEST bdev_nbd 00:12:04.274 ************************************ 00:12:04.274 13:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.274 13:37:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
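The lvol check that closes out bdev_nbd above stacks a logical volume on a malloc bdev, exports it over NBD, and proves the resulting block device is usable by formatting it. A condensed sketch using the same RPCs and sizes as the trace ($rpc is my shorthand for the rpc.py call):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MB bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MB lvol inside the store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                # export as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                   # must format cleanly
    $rpc nbd_stop_disk /dev/nbd0

The mke2fs output above ("4096 1k blocks", i.e. 4 MiB) confirms the kernel saw the lvol at its full size before the device was torn down and the NBD app killed.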
00:12:04.274 13:37:58 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:12:04.274 13:37:58 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:12:04.274 skipping fio tests on NVMe due to multi-ns failures. 00:12:04.274 13:37:58 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:12:04.274 13:37:58 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:04.274 13:37:58 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:04.274 13:37:58 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:12:04.274 13:37:58 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.274 13:37:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.274 ************************************ 00:12:04.274 START TEST bdev_verify 00:12:04.274 ************************************ 00:12:04.274 13:37:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:04.274 [2024-11-06 13:37:58.153317] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:12:04.274 [2024-11-06 13:37:58.153489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61902 ] 00:12:04.532 [2024-11-06 13:37:58.358397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:04.791 [2024-11-06 13:37:58.535504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.791 [2024-11-06 13:37:58.535520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.357 Running I/O for 5 seconds... 
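The bdev_verify test that has just started drives all six NVMe bdevs through SPDK's bdevperf example with a data-verifying workload; the command itself is in the trace above, and the flag annotations here are my reading of bdevperf's options:

    # -q 128   : keep 128 I/Os outstanding per job
    # -o 4096  : 4 KiB per I/O
    # -w verify: write, read back, and compare the payload
    # -t 5     : run for five seconds
    # -m 0x3   : core mask, reactors on cores 0 and 1
    # -C       : attach every core to every bdev, which is why each Nvme
    #            bdev shows two rows (masks 0x1 and 0x2) in the table below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

As a sanity check on the units, the first progress line below reports 18688.00 IOPS, and 18688 x 4096 B / 2^20 = 73.00 MiB/s, exactly the throughput printed next to it.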
00:12:07.666 18688.00 IOPS, 73.00 MiB/s [2024-11-06T13:38:02.584Z] 19040.00 IOPS, 74.38 MiB/s [2024-11-06T13:38:03.520Z] 18282.67 IOPS, 71.42 MiB/s [2024-11-06T13:38:04.456Z] 18160.00 IOPS, 70.94 MiB/s [2024-11-06T13:38:04.456Z] 18470.40 IOPS, 72.15 MiB/s 00:12:10.473 Latency(us) 00:12:10.473 [2024-11-06T13:38:04.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.473 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x0 length 0xbd0bd 00:12:10.473 Nvme0n1 : 5.06 1554.36 6.07 0.00 0.00 81972.73 9736.78 78393.54 00:12:10.473 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:10.473 Nvme0n1 : 5.08 1474.15 5.76 0.00 0.00 86307.96 9861.61 82887.44 00:12:10.473 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x0 length 0xa0000 00:12:10.473 Nvme1n1 : 5.07 1553.87 6.07 0.00 0.00 81884.99 10173.68 73400.32 00:12:10.473 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0xa0000 length 0xa0000 00:12:10.473 Nvme1n1 : 5.10 1482.09 5.79 0.00 0.00 85938.17 10236.10 79891.50 00:12:10.473 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x0 length 0x80000 00:12:10.473 Nvme2n1 : 5.07 1553.39 6.07 0.00 0.00 81786.85 9924.02 71403.03 00:12:10.473 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x80000 length 0x80000 00:12:10.473 Nvme2n1 : 5.10 1481.25 5.79 0.00 0.00 85777.18 11796.48 76895.57 00:12:10.473 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x0 length 0x80000 00:12:10.473 Nvme2n2 : 5.08 1561.72 6.10 0.00 0.00 81408.39 10673.01 73400.32 00:12:10.473 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x80000 length 0x80000 00:12:10.473 Nvme2n2 : 5.10 1480.87 5.78 0.00 0.00 85624.79 11671.65 74398.96 00:12:10.473 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x0 length 0x80000 00:12:10.473 Nvme2n3 : 5.08 1561.31 6.10 0.00 0.00 81271.50 10423.34 76396.25 00:12:10.473 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x80000 length 0x80000 00:12:10.473 Nvme2n3 : 5.10 1480.51 5.78 0.00 0.00 85520.68 11047.50 79392.18 00:12:10.473 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x0 length 0x20000 00:12:10.473 Nvme3n1 : 5.08 1560.84 6.10 0.00 0.00 81116.99 9674.36 78393.54 00:12:10.473 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:10.473 Verification LBA range: start 0x20000 length 0x20000 00:12:10.473 Nvme3n1 : 5.10 1480.13 5.78 0.00 0.00 85421.06 10673.01 82887.44 00:12:10.473 [2024-11-06T13:38:04.456Z] =================================================================================================================== 00:12:10.473 [2024-11-06T13:38:04.456Z] Total : 18224.48 71.19 0.00 0.00 83619.05 9674.36 82887.44 00:12:12.375 00:12:12.375 real 0m8.026s 00:12:12.375 user 0m14.722s 00:12:12.375 sys 0m0.340s 00:12:12.375 13:38:06 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.375 13:38:06 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:12.375 ************************************ 00:12:12.375 END TEST bdev_verify 00:12:12.375 ************************************ 00:12:12.375 13:38:06 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:12.375 13:38:06 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:12:12.375 13:38:06 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.375 13:38:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:12.375 ************************************ 00:12:12.375 START TEST bdev_verify_big_io 00:12:12.375 ************************************ 00:12:12.375 13:38:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:12.375 [2024-11-06 13:38:06.232297] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:12:12.375 [2024-11-06 13:38:06.232479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62006 ] 00:12:12.633 [2024-11-06 13:38:06.425104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:12.633 [2024-11-06 13:38:06.572155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.633 [2024-11-06 13:38:06.572185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.569 Running I/O for 5 seconds... 
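bdev_verify_big_io repeats the verify workload with -o 65536, so each operation now moves 64 KiB and the MiB/s column is just IOPS x I/O size / 2^20. Working that identity against the first progress line below (my arithmetic, not from the log):

    1832 IOPS x 65536 B = 120,061,952 B/s = 114.50 MiB/s

which matches the reported 114.50 MiB/s. The much lower IOPS relative to the 4 KiB run simply reflects the sixteenfold larger transfers.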
00:12:19.681 1832.00 IOPS, 114.50 MiB/s [2024-11-06T13:38:13.664Z] 2409.00 IOPS, 150.56 MiB/s 00:12:19.681 Latency(us) 00:12:19.681 [2024-11-06T13:38:13.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.681 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.681 Verification LBA range: start 0x0 length 0xbd0b 00:12:19.682 Nvme0n1 : 5.70 89.77 5.61 0.00 0.00 1387578.39 26713.72 1390112.18 00:12:19.682 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:19.682 Nvme0n1 : 5.71 89.60 5.60 0.00 0.00 1398107.43 36200.84 1422068.78 00:12:19.682 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x0 length 0xa000 00:12:19.682 Nvme1n1 : 5.73 92.70 5.79 0.00 0.00 1312870.45 22219.82 1422068.78 00:12:19.682 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0xa000 length 0xa000 00:12:19.682 Nvme1n1 : 5.72 89.56 5.60 0.00 0.00 1360592.46 62165.58 1414079.63 00:12:19.682 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x0 length 0x8000 00:12:19.682 Nvme2n1 : 5.73 92.50 5.78 0.00 0.00 1281399.68 22719.15 1454025.39 00:12:19.682 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x8000 length 0x8000 00:12:19.682 Nvme2n1 : 5.72 89.51 5.59 0.00 0.00 1328069.00 62415.24 1398101.33 00:12:19.682 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x0 length 0x8000 00:12:19.682 Nvme2n2 : 5.73 92.47 5.78 0.00 0.00 1246967.19 22968.81 1470003.69 00:12:19.682 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x8000 length 0x8000 00:12:19.682 Nvme2n2 : 5.78 92.31 5.77 0.00 0.00 1251703.70 61666.26 1430057.94 00:12:19.682 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x0 length 0x8000 00:12:19.682 Nvme2n3 : 5.79 99.53 6.22 0.00 0.00 1132219.79 52428.80 1517938.59 00:12:19.682 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x8000 length 0x8000 00:12:19.682 Nvme2n3 : 5.79 93.91 5.87 0.00 0.00 1200495.61 65411.17 1509949.44 00:12:19.682 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x0 length 0x2000 00:12:19.682 Nvme3n1 : 5.81 110.14 6.88 0.00 0.00 1000554.20 11484.40 1549895.19 00:12:19.682 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.682 Verification LBA range: start 0x2000 length 0x2000 00:12:19.682 Nvme3n1 : 5.83 106.89 6.68 0.00 0.00 1034570.69 13981.01 1541906.04 00:12:19.682 [2024-11-06T13:38:13.665Z] =================================================================================================================== 00:12:19.682 [2024-11-06T13:38:13.665Z] Total : 1138.90 71.18 0.00 0.00 1235290.52 11484.40 1549895.19 00:12:21.587 00:12:21.587 real 0m8.988s 00:12:21.587 user 0m16.661s 00:12:21.587 sys 0m0.362s 00:12:21.587 13:38:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:21.587 13:38:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # 
set +x 00:12:21.587 ************************************ 00:12:21.587 END TEST bdev_verify_big_io 00:12:21.587 ************************************ 00:12:21.587 13:38:15 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:21.587 13:38:15 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:12:21.587 13:38:15 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:21.587 13:38:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.587 ************************************ 00:12:21.587 START TEST bdev_write_zeroes 00:12:21.587 ************************************ 00:12:21.587 13:38:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:21.587 [2024-11-06 13:38:15.277830] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:12:21.587 [2024-11-06 13:38:15.277999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62120 ] 00:12:21.587 [2024-11-06 13:38:15.473450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.845 [2024-11-06 13:38:15.593519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.412 Running I/O for 1 seconds... 00:12:23.785 46400.00 IOPS, 181.25 MiB/s 00:12:23.785 Latency(us) 00:12:23.785 [2024-11-06T13:38:17.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.785 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:23.785 Nvme0n1 : 1.03 7735.20 30.22 0.00 0.00 16504.90 11297.16 34702.87 00:12:23.785 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:23.785 Nvme1n1 : 1.03 7721.70 30.16 0.00 0.00 16507.61 11609.23 34203.55 00:12:23.785 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:23.785 Nvme2n1 : 1.03 7706.87 30.10 0.00 0.00 16486.05 11109.91 32955.25 00:12:23.785 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:23.785 Nvme2n2 : 1.03 7693.37 30.05 0.00 0.00 16481.85 10985.08 30208.98 00:12:23.785 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:23.785 Nvme2n3 : 1.03 7681.54 30.01 0.00 0.00 16428.41 7583.45 32705.58 00:12:23.785 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:23.785 Nvme3n1 : 1.03 7608.14 29.72 0.00 0.00 16552.90 12420.63 34952.53 00:12:23.785 [2024-11-06T13:38:17.768Z] =================================================================================================================== 00:12:23.785 [2024-11-06T13:38:17.768Z] Total : 46146.83 180.26 0.00 0.00 16493.54 7583.45 34952.53 00:12:25.162 00:12:25.162 real 0m3.628s 00:12:25.162 user 0m3.197s 00:12:25.162 sys 0m0.307s 00:12:25.162 13:38:18 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:25.162 13:38:18 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:25.162 ************************************ 00:12:25.162 END TEST 
bdev_write_zeroes 00:12:25.162 ************************************ 00:12:25.162 13:38:18 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:25.162 13:38:18 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:12:25.162 13:38:18 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.162 13:38:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.162 ************************************ 00:12:25.162 START TEST bdev_json_nonenclosed 00:12:25.162 ************************************ 00:12:25.162 13:38:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:25.162 [2024-11-06 13:38:18.940439] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:12:25.162 [2024-11-06 13:38:18.940570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62179 ] 00:12:25.162 [2024-11-06 13:38:19.115641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.421 [2024-11-06 13:38:19.240163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.421 [2024-11-06 13:38:19.240264] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:25.421 [2024-11-06 13:38:19.240287] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:25.421 [2024-11-06 13:38:19.240299] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:25.679 00:12:25.679 real 0m0.671s 00:12:25.679 user 0m0.424s 00:12:25.679 sys 0m0.141s 00:12:25.679 13:38:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:25.679 13:38:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:25.679 ************************************ 00:12:25.679 END TEST bdev_json_nonenclosed 00:12:25.679 ************************************ 00:12:25.680 13:38:19 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:25.680 13:38:19 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:12:25.680 13:38:19 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.680 13:38:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.680 ************************************ 00:12:25.680 START TEST bdev_json_nonarray 00:12:25.680 ************************************ 00:12:25.680 13:38:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:26.039 [2024-11-06 13:38:19.720574] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
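The two JSON tests in this stretch are deliberate failure cases: bdevperf is handed a malformed --json config and must exit with a clean error rather than crash. A minimal sketch of the nonenclosed case; the file contents are my illustration, since the real nonenclosed.json is not shown in the log:

    # a config whose top level is a JSON array, not an object
    echo '[]' > /tmp/nonenclosed.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
    # expected in the output, per the trace above:
    #   json_config.c: *ERROR*: Invalid JSON configuration: not enclosed in {}.

The bdev_json_nonarray test now starting exercises the sibling error path, a config whose "subsystems" value is not an array.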
00:12:26.039 [2024-11-06 13:38:19.720755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ] 00:12:26.039 [2024-11-06 13:38:19.911468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.320 [2024-11-06 13:38:20.041121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.320 [2024-11-06 13:38:20.041231] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:26.320 [2024-11-06 13:38:20.041255] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:26.320 [2024-11-06 13:38:20.041267] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:26.579 00:12:26.579 real 0m0.745s 00:12:26.579 user 0m0.469s 00:12:26.579 sys 0m0.170s 00:12:26.579 13:38:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.579 ************************************ 00:12:26.579 END TEST bdev_json_nonarray 00:12:26.579 13:38:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:26.579 ************************************ 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:12:26.579 13:38:20 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:12:26.579 00:12:26.579 real 0m46.936s 00:12:26.579 user 1m9.717s 00:12:26.579 sys 0m8.437s 00:12:26.579 13:38:20 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.579 13:38:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.579 ************************************ 00:12:26.579 END TEST blockdev_nvme 00:12:26.579 ************************************ 00:12:26.579 13:38:20 -- spdk/autotest.sh@209 -- # uname -s 00:12:26.579 13:38:20 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:12:26.579 13:38:20 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:26.579 13:38:20 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:26.579 13:38:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.579 13:38:20 -- common/autotest_common.sh@10 -- # set +x 00:12:26.579 ************************************ 00:12:26.580 START TEST blockdev_nvme_gpt 00:12:26.580 ************************************ 00:12:26.580 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:26.580 * Looking for test storage... 
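The START TEST / END TEST banners that frame every case in this log come from the run_test wrapper in autotest_common.sh. In spirit it is just a banner-printing shim around the test command, something like this simplified sketch (the real helper also tracks timing and xtrace state):

    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        "$@"                                  # the actual test command
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }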
00:12:26.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:26.580 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:26.580 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:12:26.580 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:26.839 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:12:26.839 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.840 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:12:26.840 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:12:26.840 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.840 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:12:26.840 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.840 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.840 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.840 13:38:20 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.840 --rc genhtml_branch_coverage=1 00:12:26.840 --rc genhtml_function_coverage=1 00:12:26.840 --rc genhtml_legend=1 00:12:26.840 --rc geninfo_all_blocks=1 00:12:26.840 --rc geninfo_unexecuted_blocks=1 00:12:26.840 00:12:26.840 ' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.840 --rc 
genhtml_branch_coverage=1 00:12:26.840 --rc genhtml_function_coverage=1 00:12:26.840 --rc genhtml_legend=1 00:12:26.840 --rc geninfo_all_blocks=1 00:12:26.840 --rc geninfo_unexecuted_blocks=1 00:12:26.840 00:12:26.840 ' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.840 --rc genhtml_branch_coverage=1 00:12:26.840 --rc genhtml_function_coverage=1 00:12:26.840 --rc genhtml_legend=1 00:12:26.840 --rc geninfo_all_blocks=1 00:12:26.840 --rc geninfo_unexecuted_blocks=1 00:12:26.840 00:12:26.840 ' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:26.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.840 --rc genhtml_branch_coverage=1 00:12:26.840 --rc genhtml_function_coverage=1 00:12:26.840 --rc genhtml_legend=1 00:12:26.840 --rc geninfo_all_blocks=1 00:12:26.840 --rc geninfo_unexecuted_blocks=1 00:12:26.840 00:12:26.840 ' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62294 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62294 
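The waitforlisten call above blocks until the freshly launched spdk_tgt (pid 62294) is ready to serve RPCs on /var/tmp/spdk.sock. A minimal sketch of the idea; the real helper in autotest_common.sh retries an actual RPC rather than only checking for the socket file:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died early
            [[ -S $sock ]] && return 0                # RPC socket is up
            sleep 0.1
        done
        return 1
    }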
00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 62294 ']' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.840 13:38:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:26.840 13:38:20 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:26.840 [2024-11-06 13:38:20.793250] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:12:26.840 [2024-11-06 13:38:20.793425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62294 ] 00:12:27.098 [2024-11-06 13:38:20.989231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.357 [2024-11-06 13:38:21.114978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.293 13:38:22 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:28.293 13:38:22 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:12:28.293 13:38:22 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:12:28.293 13:38:22 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:12:28.293 13:38:22 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:28.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:28.809 Waiting for block devices as requested 00:12:28.809 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.067 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.067 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.325 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:34.592 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.592 13:38:28 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:12:34.592 13:38:28 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:12:34.592 BYT; 00:12:34.592 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:12:34.592 BYT; 00:12:34.592 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:12:34.592 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:12:34.592 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:12:34.592 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:34.592 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:34.592 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:12:34.592 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:34.593 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:34.593 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:34.593 13:38:28 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:34.593 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:34.593 13:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:12:35.527 The operation has completed successfully. 00:12:35.527 13:38:29 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:12:36.494 The operation has completed successfully. 00:12:36.494 13:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:37.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:37.645 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:37.903 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:37.903 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:37.903 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:37.903 13:38:31 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:12:37.903 13:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.903 13:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:37.903 [] 00:12:37.903 13:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.903 13:38:31 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:12:37.903 13:38:31 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:12:37.903 13:38:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:37.903 13:38:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:38.161 13:38:31 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:38.161 13:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.161 13:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.419 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.419 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:12:38.419 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:12:38.419 13:38:32 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.419 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.419 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.419 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:12:38.419 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:12:38.419 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.419 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:38.678 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.678 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:12:38.678 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:12:38.678 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b4bd17c8-78e1-4eca-8bfb-4bd32d7e96a9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b4bd17c8-78e1-4eca-8bfb-4bd32d7e96a9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8976afb8-cdec-4a28-b665-5e04a85bc893"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8976afb8-cdec-4a28-b665-5e04a85bc893",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1aad2977-bb2b-4dec-bfb3-adf1836def32"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1aad2977-bb2b-4dec-bfb3-adf1836def32",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "93b4bf01-3dbc-464e-bab6-f59d94180718"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "93b4bf01-3dbc-464e-bab6-f59d94180718",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1d2e3762-c605-4893-9200-aa3ab912616d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1d2e3762-c605-4893-9200-aa3ab912616d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:38.678 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:12:38.678 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:12:38.678 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:12:38.678 13:38:32 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62294 00:12:38.678 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 62294 ']' 00:12:38.678 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 62294 00:12:38.678 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:12:38.678 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:38.678 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62294 00:12:38.679 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:38.679 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:38.679 killing process with pid 62294 00:12:38.679 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62294' 00:12:38.679 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 62294 00:12:38.679 13:38:32 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 62294 00:12:41.209 13:38:35 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:41.209 13:38:35 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:41.209 13:38:35 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:41.209 13:38:35 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:41.209 13:38:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:41.209 ************************************ 00:12:41.209 START TEST bdev_hello_world 00:12:41.209 ************************************ 00:12:41.209 13:38:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:41.467 
[2024-11-06 13:38:35.237546] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:12:41.467 [2024-11-06 13:38:35.237790] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62937 ] 00:12:41.467 [2024-11-06 13:38:35.430767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.726 [2024-11-06 13:38:35.560706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.309 [2024-11-06 13:38:36.272317] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:42.309 [2024-11-06 13:38:36.272378] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:42.309 [2024-11-06 13:38:36.272413] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:42.309 [2024-11-06 13:38:36.275989] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:42.309 [2024-11-06 13:38:36.276529] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:42.309 [2024-11-06 13:38:36.276560] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:42.309 [2024-11-06 13:38:36.276733] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:42.309 00:12:42.309 [2024-11-06 13:38:36.276759] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:43.683 00:12:43.683 real 0m2.369s 00:12:43.683 user 0m1.980s 00:12:43.683 sys 0m0.277s 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:43.683 ************************************ 00:12:43.683 END TEST bdev_hello_world 00:12:43.683 ************************************ 00:12:43.683 13:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:12:43.683 13:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:43.683 13:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:43.683 13:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:43.683 ************************************ 00:12:43.683 START TEST bdev_bounds 00:12:43.683 ************************************ 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62979 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62979' 00:12:43.683 Process bdevio pid: 62979 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62979 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 62979 ']' 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.683 13:38:37 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:43.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:43.683 13:38:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:43.683 [2024-11-06 13:38:37.642416] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:12:43.683 [2024-11-06 13:38:37.642557] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62979 ] 00:12:43.942 [2024-11-06 13:38:37.819104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:44.200 [2024-11-06 13:38:37.949942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.200 [2024-11-06 13:38:37.950058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.200 [2024-11-06 13:38:37.950009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.765 13:38:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:44.765 13:38:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:12:44.765 13:38:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:45.024 I/O targets: 00:12:45.024 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:45.024 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:12:45.024 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:12:45.024 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:45.024 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:45.024 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:45.024 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:45.024 00:12:45.024 00:12:45.024 CUnit - A unit testing framework for C - Version 2.1-3 00:12:45.024 http://cunit.sourceforge.net/ 00:12:45.024 00:12:45.024 00:12:45.024 Suite: bdevio tests on: Nvme3n1 00:12:45.024 Test: blockdev write read block ...passed 00:12:45.024 Test: blockdev write zeroes read block ...passed 00:12:45.024 Test: blockdev write zeroes read no split ...passed 00:12:45.024 Test: blockdev write zeroes read split ...passed 00:12:45.024 Test: blockdev write zeroes read split partial ...passed 00:12:45.024 Test: blockdev reset ...[2024-11-06 13:38:38.912465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:45.024 [2024-11-06 13:38:38.916771] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
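Each "blockdev reset" case in these bdevio suites exercises spdk_bdev_reset(); the NVMe bdev module services it by disconnecting and reconnecting the controller, which is what the paired "resetting controller" / "Resetting controller successful." notices record. The same controller-level reset can be provoked by hand against a running target, e.g. (assuming the target and controller names from this log are still live):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3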
00:12:45.024 passed 00:12:45.024 Test: blockdev write read 8 blocks ...passed 00:12:45.024 Test: blockdev write read size > 128k ...passed 00:12:45.024 Test: blockdev write read invalid size ...passed 00:12:45.024 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.024 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.024 Test: blockdev write read max offset ...passed 00:12:45.024 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:45.024 Test: blockdev writev readv 8 blocks ...passed 00:12:45.024 Test: blockdev writev readv 30 x 1block ...passed 00:12:45.024 Test: blockdev writev readv block ...passed 00:12:45.024 Test: blockdev writev readv size > 128k ...passed 00:12:45.024 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:45.024 Test: blockdev comparev and writev ...[2024-11-06 13:38:38.925301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7e04000 len:0x1000 00:12:45.024 [2024-11-06 13:38:38.925476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:45.024 passed 00:12:45.024 Test: blockdev nvme passthru rw ...passed 00:12:45.024 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:38:38.926274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:45.024 [2024-11-06 13:38:38.926395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:45.024 passed 00:12:45.024 Test: blockdev nvme admin passthru ...passed 00:12:45.024 Test: blockdev copy ...passed 00:12:45.024 Suite: bdevio tests on: Nvme2n3 00:12:45.024 Test: blockdev write read block ...passed 00:12:45.024 Test: blockdev write zeroes read block ...passed 00:12:45.024 Test: blockdev write zeroes read no split ...passed 00:12:45.024 Test: blockdev write zeroes read split ...passed 00:12:45.282 Test: blockdev write zeroes read split partial ...passed 00:12:45.282 Test: blockdev reset ...[2024-11-06 13:38:39.008803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:45.282 [2024-11-06 13:38:39.013597] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:45.282 passed 00:12:45.282 Test: blockdev write read 8 blocks ...passed 00:12:45.282 Test: blockdev write read size > 128k ...passed 00:12:45.282 Test: blockdev write read invalid size ...passed 00:12:45.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.282 Test: blockdev write read max offset ...passed 00:12:45.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:45.282 Test: blockdev writev readv 8 blocks ...passed 00:12:45.282 Test: blockdev writev readv 30 x 1block ...passed 00:12:45.282 Test: blockdev writev readv block ...passed 00:12:45.282 Test: blockdev writev readv size > 128k ...passed 00:12:45.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:45.282 Test: blockdev comparev and writev ...[2024-11-06 13:38:39.021941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7e02000 len:0x1000 00:12:45.282 [2024-11-06 13:38:39.022110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:45.282 passed 00:12:45.282 Test: blockdev nvme passthru rw ...passed 00:12:45.282 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:38:39.023013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:45.282 passed 00:12:45.282 Test: blockdev nvme admin passthru ...[2024-11-06 13:38:39.023147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:45.282 passed 00:12:45.282 Test: blockdev copy ...passed 00:12:45.282 Suite: bdevio tests on: Nvme2n2 00:12:45.282 Test: blockdev write read block ...passed 00:12:45.282 Test: blockdev write zeroes read block ...passed 00:12:45.282 Test: blockdev write zeroes read no split ...passed 00:12:45.282 Test: blockdev write zeroes read split ...passed 00:12:45.282 Test: blockdev write zeroes read split partial ...passed 00:12:45.282 Test: blockdev reset ...[2024-11-06 13:38:39.104045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:45.282 [2024-11-06 13:38:39.108444] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
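The COMPARE FAILURE (02/85) notices in the comparev tests above are the expected outcome, not errors: the test writes one pattern, then issues an NVMe COMPARE against different data and asserts that the device reports a miscompare. The pair in parentheses is Status Code Type / Status Code in hex, here SCT 0x2 (media errors) with SC 0x85 (Compare Failure). A quick illustrative decode helper:

    decode_nvme_status() {
        local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
        printf 'SCT=0x%x SC=0x%x\n' "$sct" "$sc"
    }
    decode_nvme_status 02/85   # -> SCT=0x2 SC=0x85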
00:12:45.282 passed 00:12:45.282 Test: blockdev write read 8 blocks ...passed 00:12:45.282 Test: blockdev write read size > 128k ...passed 00:12:45.282 Test: blockdev write read invalid size ...passed 00:12:45.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.282 Test: blockdev write read max offset ...passed 00:12:45.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:45.282 Test: blockdev writev readv 8 blocks ...passed 00:12:45.282 Test: blockdev writev readv 30 x 1block ...passed 00:12:45.282 Test: blockdev writev readv block ...passed 00:12:45.282 Test: blockdev writev readv size > 128k ...passed 00:12:45.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:45.282 Test: blockdev comparev and writev ...[2024-11-06 13:38:39.117140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc38000 len:0x1000 00:12:45.282 [2024-11-06 13:38:39.117295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:45.282 passed 00:12:45.282 Test: blockdev nvme passthru rw ...passed 00:12:45.282 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:38:39.118183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:45.282 [2024-11-06 13:38:39.118302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:45.282 passed 00:12:45.282 Test: blockdev nvme admin passthru ...passed 00:12:45.282 Test: blockdev copy ...passed 00:12:45.282 Suite: bdevio tests on: Nvme2n1 00:12:45.282 Test: blockdev write read block ...passed 00:12:45.282 Test: blockdev write zeroes read block ...passed 00:12:45.282 Test: blockdev write zeroes read no split ...passed 00:12:45.282 Test: blockdev write zeroes read split ...passed 00:12:45.282 Test: blockdev write zeroes read split partial ...passed 00:12:45.282 Test: blockdev reset ...[2024-11-06 13:38:39.195881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:45.282 [2024-11-06 13:38:39.200368] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
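Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of one controller at 0000:00:12.0 (serial 12342 in the bdev dump earlier), so each of their reset tests bounces the same PCI device. The bdevs behind a given controller can be picked out of the bdev dump with a jq filter such as:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.driver_specific.nvme? != null)
                     | select(.driver_specific.nvme[0].trid.traddr == "0000:00:12.0") | .name'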
00:12:45.282 passed 00:12:45.282 Test: blockdev write read 8 blocks ...passed 00:12:45.282 Test: blockdev write read size > 128k ...passed 00:12:45.282 Test: blockdev write read invalid size ...passed 00:12:45.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.283 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.283 Test: blockdev write read max offset ...passed 00:12:45.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:45.283 Test: blockdev writev readv 8 blocks ...passed 00:12:45.283 Test: blockdev writev readv 30 x 1block ...passed 00:12:45.283 Test: blockdev writev readv block ...passed 00:12:45.283 Test: blockdev writev readv size > 128k ...passed 00:12:45.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:45.283 Test: blockdev comparev and writev ...[2024-11-06 13:38:39.208952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc34000 len:0x1000 00:12:45.283 [2024-11-06 13:38:39.209005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:45.283 passed 00:12:45.283 Test: blockdev nvme passthru rw ...passed 00:12:45.283 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:38:39.209753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:45.283 [2024-11-06 13:38:39.209802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:45.283 passed 00:12:45.283 Test: blockdev nvme admin passthru ...passed 00:12:45.283 Test: blockdev copy ...passed 00:12:45.283 Suite: bdevio tests on: Nvme1n1p2 00:12:45.283 Test: blockdev write read block ...passed 00:12:45.283 Test: blockdev write zeroes read block ...passed 00:12:45.283 Test: blockdev write zeroes read no split ...passed 00:12:45.283 Test: blockdev write zeroes read split ...passed 00:12:45.541 Test: blockdev write zeroes read split partial ...passed 00:12:45.541 Test: blockdev reset ...[2024-11-06 13:38:39.289587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:45.541 [2024-11-06 13:38:39.293825] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:45.541 passed 00:12:45.541 Test: blockdev write read 8 blocks ...passed 00:12:45.541 Test: blockdev write read size > 128k ...passed 00:12:45.541 Test: blockdev write read invalid size ...passed 00:12:45.541 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.541 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.541 Test: blockdev write read max offset ...passed 00:12:45.541 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:45.541 Test: blockdev writev readv 8 blocks ...passed 00:12:45.541 Test: blockdev writev readv 30 x 1block ...passed 00:12:45.541 Test: blockdev writev readv block ...passed 00:12:45.541 Test: blockdev writev readv size > 128k ...passed 00:12:45.541 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:45.541 Test: blockdev comparev and writev ...[2024-11-06 13:38:39.302600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cbc30000 len:0x1000 00:12:45.541 [2024-11-06 13:38:39.302650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:45.541 passed 00:12:45.541 Test: blockdev nvme passthru rw ...passed 00:12:45.541 Test: blockdev nvme passthru vendor specific ...passed 00:12:45.541 Test: blockdev nvme admin passthru ...passed 00:12:45.541 Test: blockdev copy ...passed 00:12:45.541 Suite: bdevio tests on: Nvme1n1p1 00:12:45.541 Test: blockdev write read block ...passed 00:12:45.541 Test: blockdev write zeroes read block ...passed 00:12:45.541 Test: blockdev write zeroes read no split ...passed 00:12:45.541 Test: blockdev write zeroes read split ...passed 00:12:45.541 Test: blockdev write zeroes read split partial ...passed 00:12:45.541 Test: blockdev reset ...[2024-11-06 13:38:39.376821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:45.541 [2024-11-06 13:38:39.380901] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
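The comparev for Nvme1n1p2 above lands on lba:655360 of the base namespace even though the test targeted partition-relative block 0: GPT bdevs translate every LBA by their offset_blocks. From the bdev dump earlier in this log, Nvme1n1p1 starts at block 256 and spans 655104 blocks, so Nvme1n1p2 begins immediately behind it (and the Nvme1n1p1 suite below correspondingly shows lba:256):

    p1_offset=256
    p1_blocks=655104
    p2_offset=$((p1_offset + p1_blocks))   # 655360, matching lba:655360 above
    echo "Nvme1n1p2 LBA 0 -> Nvme1n1 LBA $p2_offset"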
00:12:45.541 passed 00:12:45.541 Test: blockdev write read 8 blocks ...passed 00:12:45.541 Test: blockdev write read size > 128k ...passed 00:12:45.541 Test: blockdev write read invalid size ...passed 00:12:45.541 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.541 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.541 Test: blockdev write read max offset ...passed 00:12:45.541 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:45.541 Test: blockdev writev readv 8 blocks ...passed 00:12:45.541 Test: blockdev writev readv 30 x 1block ...passed 00:12:45.541 Test: blockdev writev readv block ...passed 00:12:45.541 Test: blockdev writev readv size > 128k ...passed 00:12:45.541 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:45.542 Test: blockdev comparev and writev ...[2024-11-06 13:38:39.389446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b800e000 len:0x1000 00:12:45.542 [2024-11-06 13:38:39.389500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:45.542 passed 00:12:45.542 Test: blockdev nvme passthru rw ...passed 00:12:45.542 Test: blockdev nvme passthru vendor specific ...passed 00:12:45.542 Test: blockdev nvme admin passthru ...passed 00:12:45.542 Test: blockdev copy ...passed 00:12:45.542 Suite: bdevio tests on: Nvme0n1 00:12:45.542 Test: blockdev write read block ...passed 00:12:45.542 Test: blockdev write zeroes read block ...passed 00:12:45.542 Test: blockdev write zeroes read no split ...passed 00:12:45.542 Test: blockdev write zeroes read split ...passed 00:12:45.542 Test: blockdev write zeroes read split partial ...passed 00:12:45.542 Test: blockdev reset ...[2024-11-06 13:38:39.465246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:45.542 [2024-11-06 13:38:39.469583] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:45.542 passed 00:12:45.542 Test: blockdev write read 8 blocks ...passed 00:12:45.542 Test: blockdev write read size > 128k ...passed 00:12:45.542 Test: blockdev write read invalid size ...passed 00:12:45.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.542 Test: blockdev write read max offset ...passed 00:12:45.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:45.542 Test: blockdev writev readv 8 blocks ...passed 00:12:45.542 Test: blockdev writev readv 30 x 1block ...passed 00:12:45.542 Test: blockdev writev readv block ...passed 00:12:45.542 Test: blockdev writev readv size > 128k ...passed 00:12:45.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:45.542 Test: blockdev comparev and writev ...passed 00:12:45.542 Test: blockdev nvme passthru rw ...[2024-11-06 13:38:39.477048] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:45.542 separate metadata which is not supported yet. 
00:12:45.542 passed 00:12:45.542 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:38:39.477483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:45.542 [2024-11-06 13:38:39.477529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:45.542 passed 00:12:45.542 Test: blockdev nvme admin passthru ...passed 00:12:45.542 Test: blockdev copy ...passed 00:12:45.542 00:12:45.542 Run Summary: Type Total Ran Passed Failed Inactive 00:12:45.542 suites 7 7 n/a 0 0 00:12:45.542 tests 161 161 161 0 0 00:12:45.542 asserts 1025 1025 1025 0 n/a 00:12:45.542 00:12:45.542 Elapsed time = 1.762 seconds 00:12:45.542 0 00:12:45.542 13:38:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62979 00:12:45.542 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 62979 ']' 00:12:45.542 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 62979 00:12:45.542 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:12:45.542 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:45.542 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62979 00:12:45.800 killing process with pid 62979 00:12:45.801 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:45.801 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:45.801 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62979' 00:12:45.801 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 62979 00:12:45.801 13:38:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 62979 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:47.175 00:12:47.175 real 0m3.181s 00:12:47.175 user 0m8.365s 00:12:47.175 sys 0m0.426s 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:47.175 ************************************ 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:47.175 END TEST bdev_bounds 00:12:47.175 ************************************ 00:12:47.175 13:38:40 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:47.175 13:38:40 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:47.175 13:38:40 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:47.175 13:38:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:47.175 ************************************ 00:12:47.175 START TEST bdev_nbd 00:12:47.175 ************************************ 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63044 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63044 /var/tmp/spdk-nbd.sock 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 63044 ']' 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:47.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:47.175 13:38:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:47.175 [2024-11-06 13:38:40.920595] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
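Here nbd_function_test begins: bdev_svc is started with its RPC server on /var/tmp/spdk-nbd.sock plus the generated bdev.json, and each of the seven bdevs is then exported as a /dev/nbdX block device through the nbd_start_disk RPC. A condensed sketch of the same flow by hand, using the commands visible in this trace (the modprobe is an assumption; the harness only checks that /sys/module/nbd exists):

sudo modprobe nbd
sudo ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
# one O_DIRECT block read is the same liveness probe waitfornbd uses below
sudo dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0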
00:12:47.175 [2024-11-06 13:38:40.921520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.175 [2024-11-06 13:38:41.128768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.433 [2024-11-06 13:38:41.302549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:48.369 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.628 1+0 records in 00:12:48.628 1+0 records out 00:12:48.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459836 s, 8.9 MB/s 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:48.628 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.887 1+0 records in 00:12:48.887 1+0 records out 00:12:48.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584528 s, 7.0 MB/s 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:48.887 13:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.146 1+0 records in 00:12:49.146 1+0 records out 00:12:49.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475247 s, 8.6 MB/s 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:49.146 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.712 1+0 records in 00:12:49.712 1+0 records out 00:12:49.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736385 s, 5.6 MB/s 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:49.712 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.971 1+0 records in 00:12:49.971 1+0 records out 00:12:49.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573117 s, 7.1 MB/s 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:49.971 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:12:50.230 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:50.230 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:50.230 13:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:50.230 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:12:50.230 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:50.230 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:50.230 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:50.230 13:38:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.230 1+0 records in 00:12:50.230 1+0 records out 00:12:50.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598766 s, 6.8 MB/s 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:50.230 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.489 1+0 records in 00:12:50.489 1+0 records out 00:12:50.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673654 s, 6.1 MB/s 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:50.489 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:50.748 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd0", 00:12:50.748 "bdev_name": "Nvme0n1" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd1", 00:12:50.748 "bdev_name": "Nvme1n1p1" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd2", 00:12:50.748 "bdev_name": "Nvme1n1p2" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd3", 00:12:50.748 "bdev_name": "Nvme2n1" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd4", 00:12:50.748 "bdev_name": "Nvme2n2" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd5", 00:12:50.748 "bdev_name": "Nvme2n3" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd6", 00:12:50.748 "bdev_name": "Nvme3n1" 00:12:50.748 } 00:12:50.748 ]' 00:12:50.748 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:50.748 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd0", 00:12:50.748 "bdev_name": "Nvme0n1" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd1", 00:12:50.748 "bdev_name": "Nvme1n1p1" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd2", 00:12:50.748 "bdev_name": "Nvme1n1p2" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd3", 00:12:50.748 "bdev_name": "Nvme2n1" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd4", 00:12:50.748 "bdev_name": "Nvme2n2" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd5", 00:12:50.748 "bdev_name": "Nvme2n3" 00:12:50.748 }, 00:12:50.748 { 00:12:50.748 "nbd_device": "/dev/nbd6", 00:12:50.748 "bdev_name": "Nvme3n1" 00:12:50.748 } 00:12:50.748 ]' 00:12:50.748 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:50.748 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:12:50.748 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.748 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:12:50.749 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.749 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:50.749 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.749 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.008 13:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.267 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.525 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.525 13:38:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.784 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:52.042 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:52.042 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:52.042 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:52.042 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.043 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.043 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:52.043 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.043 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.043 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.043 13:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:52.301 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:52.301 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:52.301 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.559 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.127 
13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:53.127 13:38:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:12:53.387 /dev/nbd0 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.387 1+0 records in 00:12:53.387 1+0 records out 00:12:53.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483339 s, 8.5 MB/s 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:53.387 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:12:53.646 /dev/nbd1 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:53.646 13:38:47 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.646 1+0 records in 00:12:53.646 1+0 records out 00:12:53.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538237 s, 7.6 MB/s 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:53.646 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:12:53.904 /dev/nbd10 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.904 1+0 records in 00:12:53.904 1+0 records out 00:12:53.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556281 s, 7.4 MB/s 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:53.904 13:38:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:12:54.163 /dev/nbd11 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.163 1+0 records in 00:12:54.163 1+0 records out 00:12:54.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048345 s, 8.5 MB/s 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:54.163 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:12:54.422 /dev/nbd12 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
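The waitfornbd helper that dominates this part of the trace reduces to two loops: poll /proc/partitions until the nbd device registers, then retry a one-block O_DIRECT read into a scratch file and confirm a non-zero size was copied. A condensed sketch of that logic (the 0.1 s sleep between polls is an assumption; the real helper lives in common/autotest_common.sh):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break   # device visible yet?
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
        # direct I/O so the read really hits the nbd server, not the page cache
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}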
00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.422 1+0 records in 00:12:54.422 1+0 records out 00:12:54.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073086 s, 5.6 MB/s 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:54.422 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:12:54.681 /dev/nbd13 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.681 1+0 records in 00:12:54.681 1+0 records out 00:12:54.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599085 s, 6.8 MB/s 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:54.681 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:12:55.248 /dev/nbd14 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.248 1+0 records in 00:12:55.248 1+0 records out 00:12:55.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107864 s, 3.8 MB/s 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:55.248 13:38:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:55.248 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.248 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:55.248 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:55.248 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.248 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:55.506 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:55.506 { 00:12:55.506 "nbd_device": "/dev/nbd0", 00:12:55.507 "bdev_name": "Nvme0n1" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd1", 00:12:55.507 "bdev_name": "Nvme1n1p1" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd10", 00:12:55.507 "bdev_name": "Nvme1n1p2" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd11", 00:12:55.507 "bdev_name": "Nvme2n1" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd12", 00:12:55.507 "bdev_name": "Nvme2n2" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd13", 00:12:55.507 "bdev_name": "Nvme2n3" 
00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd14", 00:12:55.507 "bdev_name": "Nvme3n1" 00:12:55.507 } 00:12:55.507 ]' 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd0", 00:12:55.507 "bdev_name": "Nvme0n1" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd1", 00:12:55.507 "bdev_name": "Nvme1n1p1" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd10", 00:12:55.507 "bdev_name": "Nvme1n1p2" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd11", 00:12:55.507 "bdev_name": "Nvme2n1" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd12", 00:12:55.507 "bdev_name": "Nvme2n2" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd13", 00:12:55.507 "bdev_name": "Nvme2n3" 00:12:55.507 }, 00:12:55.507 { 00:12:55.507 "nbd_device": "/dev/nbd14", 00:12:55.507 "bdev_name": "Nvme3n1" 00:12:55.507 } 00:12:55.507 ]' 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:55.507 /dev/nbd1 00:12:55.507 /dev/nbd10 00:12:55.507 /dev/nbd11 00:12:55.507 /dev/nbd12 00:12:55.507 /dev/nbd13 00:12:55.507 /dev/nbd14' 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:55.507 /dev/nbd1 00:12:55.507 /dev/nbd10 00:12:55.507 /dev/nbd11 00:12:55.507 /dev/nbd12 00:12:55.507 /dev/nbd13 00:12:55.507 /dev/nbd14' 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:55.507 256+0 records in 00:12:55.507 256+0 records out 00:12:55.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00811314 s, 129 MB/s 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.507 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:55.766 256+0 records in 00:12:55.766 256+0 records out 00:12:55.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.144636 s, 7.2 MB/s 00:12:55.766 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.766 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:55.766 256+0 records in 00:12:55.766 256+0 records out 00:12:55.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147409 s, 7.1 MB/s 00:12:55.766 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.766 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:56.024 256+0 records in 00:12:56.024 256+0 records out 00:12:56.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148234 s, 7.1 MB/s 00:12:56.024 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.024 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:56.024 256+0 records in 00:12:56.024 256+0 records out 00:12:56.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139469 s, 7.5 MB/s 00:12:56.024 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.024 13:38:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:56.282 256+0 records in 00:12:56.282 256+0 records out 00:12:56.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14672 s, 7.1 MB/s 00:12:56.282 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.282 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:56.540 256+0 records in 00:12:56.540 256+0 records out 00:12:56.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146683 s, 7.1 MB/s 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:56.540 256+0 records in 00:12:56.540 256+0 records out 00:12:56.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148682 s, 7.1 MB/s 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:56.540 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.541 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.806 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.077 13:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.335 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:57.595 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.854 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.113 13:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.372 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:58.631 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:58.631 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:58.631 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:58.631 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.631 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.632 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:58.632 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.632 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.632 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:58.632 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.632 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:58.890 13:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:59.149 malloc_lvol_verify 00:12:59.149 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:59.408 96c142f6-0b64-421a-8ef1-29685edc3a79 00:12:59.408 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:59.667 c3d9df0a-328c-4add-bccf-92bcfe1bba6e 00:12:59.667 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:59.926 /dev/nbd0 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:59.926 mke2fs 1.47.0 (5-Feb-2023) 00:12:59.926 Discarding device blocks: 0/4096 done 00:12:59.926 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:59.926 00:12:59.926 Allocating group tables: 0/1 done 00:12:59.926 Writing inode tables: 0/1 done 00:12:59.926 Creating journal (1024 blocks): done 00:12:59.926 Writing superblocks and filesystem accounting information: 0/1 done 00:12:59.926 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:59.926 13:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63044 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 63044 ']' 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 63044 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63044 00:13:00.185 killing process with pid 63044 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63044' 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 63044 00:13:00.185 13:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 63044 00:13:01.565 13:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:01.565 ************************************ 00:13:01.565 END TEST bdev_nbd 00:13:01.565 ************************************ 00:13:01.565 00:13:01.565 real 0m14.666s 00:13:01.565 user 0m19.698s 00:13:01.565 sys 0m5.921s 00:13:01.565 13:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.565 13:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:01.565 13:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:13:01.565 13:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:13:01.565 skipping fio tests on NVMe due to multi-ns failures. 00:13:01.565 13:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:13:01.565 13:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:13:01.565 13:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:01.565 13:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:01.565 13:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:01.565 13:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.565 13:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:01.565 ************************************ 00:13:01.565 START TEST bdev_verify 00:13:01.565 ************************************ 00:13:01.565 13:38:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:01.824 [2024-11-06 13:38:55.648106] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:13:01.824 [2024-11-06 13:38:55.648298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63498 ] 00:13:02.083 [2024-11-06 13:38:55.854065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:02.083 [2024-11-06 13:38:55.985109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.083 [2024-11-06 13:38:55.985137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.033 Running I/O for 5 seconds... 
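Flag summary for the bdevperf invocation above: -q 128 keeps 128 I/Os outstanding per job, -o 4096 issues 4 KiB I/Os, -w verify writes a pattern and reads it back for comparison, -t 5 runs for five seconds, -m 0x3 starts reactors on cores 0 and 1, and -C lets every core drive every bdev, which is why each bdev appears twice in the results (one job per core mask, 0x1 and 0x2). A minimal standalone re-run, using the same paths as the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3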
00:13:05.341 18304.00 IOPS, 71.50 MiB/s [2024-11-06T13:39:00.257Z] 18080.00 IOPS, 70.62 MiB/s [2024-11-06T13:39:01.191Z] 18453.33 IOPS, 72.08 MiB/s [2024-11-06T13:39:02.156Z] 18368.00 IOPS, 71.75 MiB/s [2024-11-06T13:39:02.156Z] 18508.80 IOPS, 72.30 MiB/s 00:13:08.173 Latency(us) 00:13:08.173 [2024-11-06T13:39:02.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.173 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x0 length 0xbd0bd 00:13:08.173 Nvme0n1 : 5.09 1334.08 5.21 0.00 0.00 95720.48 18599.74 88379.98 00:13:08.173 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:08.173 Nvme0n1 : 5.11 1277.78 4.99 0.00 0.00 99924.31 21346.01 95370.48 00:13:08.173 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x0 length 0x4ff80 00:13:08.173 Nvme1n1p1 : 5.09 1333.66 5.21 0.00 0.00 95603.15 18849.40 85384.05 00:13:08.173 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x4ff80 length 0x4ff80 00:13:08.173 Nvme1n1p1 : 5.11 1277.30 4.99 0.00 0.00 99739.55 18225.25 90377.26 00:13:08.173 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x0 length 0x4ff7f 00:13:08.173 Nvme1n1p2 : 5.09 1333.27 5.21 0.00 0.00 95429.33 19099.06 82388.11 00:13:08.173 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:13:08.173 Nvme1n1p2 : 5.11 1276.81 4.99 0.00 0.00 99595.68 17850.76 86382.69 00:13:08.173 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x0 length 0x80000 00:13:08.173 Nvme2n1 : 5.09 1332.90 5.21 0.00 0.00 95277.30 19099.06 78892.86 00:13:08.173 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x80000 length 0x80000 00:13:08.173 Nvme2n1 : 5.11 1276.34 4.99 0.00 0.00 99404.52 18350.08 86382.69 00:13:08.173 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x0 length 0x80000 00:13:08.173 Nvme2n2 : 5.09 1332.54 5.21 0.00 0.00 95107.09 19099.06 77394.90 00:13:08.173 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x80000 length 0x80000 00:13:08.173 Nvme2n2 : 5.12 1275.89 4.98 0.00 0.00 99223.06 18599.74 91375.91 00:13:08.173 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x0 length 0x80000 00:13:08.173 Nvme2n3 : 5.09 1332.14 5.20 0.00 0.00 94940.89 18599.74 81389.47 00:13:08.173 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x80000 length 0x80000 00:13:08.173 Nvme2n3 : 5.12 1275.45 4.98 0.00 0.00 99032.55 18350.08 92374.55 00:13:08.173 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x0 length 0x20000 00:13:08.173 Nvme3n1 : 5.09 1331.77 5.20 0.00 0.00 94773.99 13232.03 85384.05 00:13:08.173 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:08.173 Verification LBA range: start 0x20000 length 0x20000 00:13:08.173 
Nvme3n1 : 5.12 1275.00 4.98 0.00 0.00 98883.43 13356.86 94871.16 00:13:08.173 [2024-11-06T13:39:02.156Z] =================================================================================================================== 00:13:08.173 [2024-11-06T13:39:02.156Z] Total : 18264.93 71.35 0.00 0.00 97292.76 13232.03 95370.48 00:13:09.550 00:13:09.550 real 0m7.941s 00:13:09.550 user 0m14.588s 00:13:09.550 sys 0m0.350s 00:13:09.550 13:39:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:09.550 13:39:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:09.550 ************************************ 00:13:09.550 END TEST bdev_verify 00:13:09.550 ************************************ 00:13:09.550 13:39:03 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:09.550 13:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:09.550 13:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:09.550 13:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:09.550 ************************************ 00:13:09.550 START TEST bdev_verify_big_io 00:13:09.550 ************************************ 00:13:09.550 13:39:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:09.809 [2024-11-06 13:39:03.639267] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:13:09.809 [2024-11-06 13:39:03.639451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63607 ] 00:13:10.067 [2024-11-06 13:39:03.833127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:10.067 [2024-11-06 13:39:03.959885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.067 [2024-11-06 13:39:03.959907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.003 Running I/O for 5 seconds... 
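The big-I/O pass differs from the previous run only in -o 65536, so each I/O moves 64 KiB and throughput is IOPS x 64 KiB. A quick sanity check against the Total row further down (values copied from that row):

  awk 'BEGIN { printf "%.2f MiB/s\n", 1779.45 * 65536 / 1048576 }'   # prints 111.22, matching "Total : 1779.45 111.22"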
00:13:16.840 1388.00 IOPS, 86.75 MiB/s [2024-11-06T13:39:11.081Z] 3041.50 IOPS, 190.09 MiB/s [2024-11-06T13:39:11.081Z] 3321.67 IOPS, 207.60 MiB/s 00:13:17.098 Latency(us) 00:13:17.098 [2024-11-06T13:39:11.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.098 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x0 length 0xbd0b 00:13:17.098 Nvme0n1 : 5.79 120.68 7.54 0.00 0.00 1020710.50 20222.54 1010627.54 00:13:17.098 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:17.098 Nvme0n1 : 5.79 114.51 7.16 0.00 0.00 1072008.31 11172.33 1470003.69 00:13:17.098 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x0 length 0x4ff8 00:13:17.098 Nvme1n1p1 : 5.79 120.13 7.51 0.00 0.00 992939.99 64911.85 1030600.41 00:13:17.098 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x4ff8 length 0x4ff8 00:13:17.098 Nvme1n1p1 : 5.89 117.42 7.34 0.00 0.00 1021184.56 26214.40 1501960.29 00:13:17.098 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x0 length 0x4ff7 00:13:17.098 Nvme1n1p2 : 5.84 125.61 7.85 0.00 0.00 940057.16 71902.35 998643.81 00:13:17.098 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x4ff7 length 0x4ff7 00:13:17.098 Nvme1n1p2 : 5.89 117.33 7.33 0.00 0.00 993317.83 44689.31 1517938.59 00:13:17.098 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x0 length 0x8000 00:13:17.098 Nvme2n1 : 5.84 125.75 7.86 0.00 0.00 912728.51 72901.00 1006632.96 00:13:17.098 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x8000 length 0x8000 00:13:17.098 Nvme2n1 : 5.89 120.66 7.54 0.00 0.00 949039.41 58919.98 1334188.13 00:13:17.098 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x0 length 0x8000 00:13:17.098 Nvme2n2 : 5.89 130.78 8.17 0.00 0.00 858764.29 45438.29 1014622.11 00:13:17.098 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x8000 length 0x8000 00:13:17.098 Nvme2n2 : 5.90 122.11 7.63 0.00 0.00 913827.05 74398.96 1557884.34 00:13:17.098 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x0 length 0x8000 00:13:17.098 Nvme2n3 : 5.89 135.62 8.48 0.00 0.00 810215.56 40445.07 1018616.69 00:13:17.098 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x8000 length 0x8000 00:13:17.098 Nvme2n3 : 5.95 131.49 8.22 0.00 0.00 830513.44 23093.64 1581851.79 00:13:17.098 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x0 length 0x2000 00:13:17.098 Nvme3n1 : 5.94 150.97 9.44 0.00 0.00 710607.92 8987.79 1046578.71 00:13:17.098 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:17.098 Verification LBA range: start 0x2000 length 0x2000 00:13:17.098 Nvme3n1 : 5.99 146.38 9.15 0.00 0.00 728134.94 6959.30 1605819.25 00:13:17.098 
[2024-11-06T13:39:11.081Z] =================================================================================================================== 00:13:17.098 [2024-11-06T13:39:11.081Z] Total : 1779.45 111.22 0.00 0.00 901573.56 6959.30 1605819.25 00:13:19.056 00:13:19.056 real 0m9.424s 00:13:19.056 user 0m17.549s 00:13:19.056 sys 0m0.361s 00:13:19.056 13:39:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.056 13:39:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.056 ************************************ 00:13:19.056 END TEST bdev_verify_big_io 00:13:19.056 ************************************ 00:13:19.056 13:39:12 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:19.056 13:39:12 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:19.056 13:39:12 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.056 13:39:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:19.056 ************************************ 00:13:19.056 START TEST bdev_write_zeroes 00:13:19.056 ************************************ 00:13:19.056 13:39:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:19.314 [2024-11-06 13:39:13.125497] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:13:19.314 [2024-11-06 13:39:13.125672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63722 ] 00:13:19.572 [2024-11-06 13:39:13.321562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.572 [2024-11-06 13:39:13.450888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.506 Running I/O for 1 seconds... 
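Here -w write_zeroes makes bdevperf issue zero-fill commands instead of data writes, and with no -C or -m the app runs on a single core, so each bdev gets exactly one Core Mask 0x1 job. The throughput arithmetic is unchanged, as the first sample printed below shows:

  echo $(( 51904 * 4096 / 1048576 ))   # integer MiB/s = 202 (exact: 51904 IOPS x 4 KiB = 202.75 MiB/s)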
00:13:21.440 51904.00 IOPS, 202.75 MiB/s 00:13:21.440 Latency(us) 00:13:21.440 [2024-11-06T13:39:15.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.440 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.440 Nvme0n1 : 1.02 7430.96 29.03 0.00 0.00 17172.11 9237.46 29709.65 00:13:21.440 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.440 Nvme1n1p1 : 1.03 7421.63 28.99 0.00 0.00 17166.22 13606.52 30708.30 00:13:21.440 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.440 Nvme1n1p2 : 1.03 7413.15 28.96 0.00 0.00 17077.28 13169.62 26963.38 00:13:21.440 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.440 Nvme2n1 : 1.03 7405.38 28.93 0.00 0.00 17047.19 12483.05 25215.76 00:13:21.440 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.441 Nvme2n2 : 1.03 7452.32 29.11 0.00 0.00 16937.66 7708.28 23967.45 00:13:21.441 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.441 Nvme2n3 : 1.03 7445.45 29.08 0.00 0.00 16907.37 7895.53 25590.25 00:13:21.441 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:21.441 Nvme3n1 : 1.03 7376.32 28.81 0.00 0.00 17023.13 7989.15 27088.21 00:13:21.441 [2024-11-06T13:39:15.424Z] =================================================================================================================== 00:13:21.441 [2024-11-06T13:39:15.424Z] Total : 51945.23 202.91 0.00 0.00 17046.98 7708.28 30708.30 00:13:22.817 00:13:22.817 real 0m3.543s 00:13:22.817 user 0m3.109s 00:13:22.817 sys 0m0.315s 00:13:22.817 13:39:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:22.817 ************************************ 00:13:22.817 END TEST bdev_write_zeroes 00:13:22.817 ************************************ 00:13:22.817 13:39:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:22.817 13:39:16 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.817 13:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:22.817 13:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:22.817 13:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:22.817 ************************************ 00:13:22.817 START TEST bdev_json_nonenclosed 00:13:22.817 ************************************ 00:13:22.817 13:39:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.817 [2024-11-06 13:39:16.700070] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:13:22.817 [2024-11-06 13:39:16.700195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63780 ] 00:13:23.076 [2024-11-06 13:39:16.871381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.076 [2024-11-06 13:39:16.995759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.076 [2024-11-06 13:39:16.995859] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:23.076 [2024-11-06 13:39:16.995884] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:23.076 [2024-11-06 13:39:16.995898] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:23.334 00:13:23.334 real 0m0.659s 00:13:23.334 user 0m0.416s 00:13:23.334 sys 0m0.139s 00:13:23.334 13:39:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.334 13:39:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:23.334 ************************************ 00:13:23.334 END TEST bdev_json_nonenclosed 00:13:23.334 ************************************ 00:13:23.334 13:39:17 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:23.334 13:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:23.334 13:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:23.334 13:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:23.600 ************************************ 00:13:23.600 START TEST bdev_json_nonarray 00:13:23.600 ************************************ 00:13:23.600 13:39:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:23.600 [2024-11-06 13:39:17.413218] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:13:23.600 [2024-11-06 13:39:17.413342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63806 ] 00:13:23.913 [2024-11-06 13:39:17.590183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.913 [2024-11-06 13:39:17.716017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.913 [2024-11-06 13:39:17.716141] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
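Both JSON negative tests feed bdevperf a deliberately malformed --json config and pass when the app rejects it cleanly (note the spdk_app_stop'd on non-zero warnings) instead of crashing. Illustrative shapes only, inferred from the two error strings rather than copied from the actual fixture files:

  echo '"subsystems": []' > nonenclosed.json     # top-level members not enclosed in {}
  echo '{ "subsystems": "x" }' > nonarray.json   # enclosed, but "subsystems" is not an array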
00:13:23.913 [2024-11-06 13:39:17.716166] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:23.913 [2024-11-06 13:39:17.716179] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:24.171 00:13:24.171 real 0m0.671s 00:13:24.171 user 0m0.423s 00:13:24.171 sys 0m0.144s 00:13:24.171 13:39:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:24.171 13:39:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:24.171 ************************************ 00:13:24.172 END TEST bdev_json_nonarray 00:13:24.172 ************************************ 00:13:24.172 13:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:13:24.172 13:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:13:24.172 13:39:18 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:13:24.172 13:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:24.172 13:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:24.172 13:39:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.172 ************************************ 00:13:24.172 START TEST bdev_gpt_uuid 00:13:24.172 ************************************ 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63837 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63837 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 63837 ']' 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.172 13:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:24.429 [2024-11-06 13:39:18.221130] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:13:24.429 [2024-11-06 13:39:18.221307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63837 ] 00:13:24.687 [2024-11-06 13:39:18.435951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.687 [2024-11-06 13:39:18.596640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.623 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:25.623 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:13:25.623 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:25.623 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.623 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:26.189 Some configs were skipped because the RPC state that can call them passed over. 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:13:26.189 { 00:13:26.189 "name": "Nvme1n1p1", 00:13:26.189 "aliases": [ 00:13:26.189 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:13:26.189 ], 00:13:26.189 "product_name": "GPT Disk", 00:13:26.189 "block_size": 4096, 00:13:26.189 "num_blocks": 655104, 00:13:26.189 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:26.189 "assigned_rate_limits": { 00:13:26.189 "rw_ios_per_sec": 0, 00:13:26.189 "rw_mbytes_per_sec": 0, 00:13:26.189 "r_mbytes_per_sec": 0, 00:13:26.189 "w_mbytes_per_sec": 0 00:13:26.189 }, 00:13:26.189 "claimed": false, 00:13:26.189 "zoned": false, 00:13:26.189 "supported_io_types": { 00:13:26.189 "read": true, 00:13:26.189 "write": true, 00:13:26.189 "unmap": true, 00:13:26.189 "flush": true, 00:13:26.189 "reset": true, 00:13:26.189 "nvme_admin": false, 00:13:26.189 "nvme_io": false, 00:13:26.189 "nvme_io_md": false, 00:13:26.189 "write_zeroes": true, 00:13:26.189 "zcopy": false, 00:13:26.189 "get_zone_info": false, 00:13:26.189 "zone_management": false, 00:13:26.189 "zone_append": false, 00:13:26.189 "compare": true, 00:13:26.189 "compare_and_write": false, 00:13:26.189 "abort": true, 00:13:26.189 "seek_hole": false, 00:13:26.189 "seek_data": false, 00:13:26.189 "copy": true, 00:13:26.189 "nvme_iov_md": false 00:13:26.189 }, 00:13:26.189 "driver_specific": { 
00:13:26.189 "gpt": { 00:13:26.189 "base_bdev": "Nvme1n1", 00:13:26.189 "offset_blocks": 256, 00:13:26.189 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:13:26.189 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:26.189 "partition_name": "SPDK_TEST_first" 00:13:26.189 } 00:13:26.189 } 00:13:26.189 } 00:13:26.189 ]' 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:13:26.189 13:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:13:26.189 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:26.189 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:26.189 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:26.189 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:26.189 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.190 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:26.190 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.190 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:13:26.190 { 00:13:26.190 "name": "Nvme1n1p2", 00:13:26.190 "aliases": [ 00:13:26.190 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:13:26.190 ], 00:13:26.190 "product_name": "GPT Disk", 00:13:26.190 "block_size": 4096, 00:13:26.190 "num_blocks": 655103, 00:13:26.190 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:26.190 "assigned_rate_limits": { 00:13:26.190 "rw_ios_per_sec": 0, 00:13:26.190 "rw_mbytes_per_sec": 0, 00:13:26.190 "r_mbytes_per_sec": 0, 00:13:26.190 "w_mbytes_per_sec": 0 00:13:26.190 }, 00:13:26.190 "claimed": false, 00:13:26.190 "zoned": false, 00:13:26.190 "supported_io_types": { 00:13:26.190 "read": true, 00:13:26.190 "write": true, 00:13:26.190 "unmap": true, 00:13:26.190 "flush": true, 00:13:26.190 "reset": true, 00:13:26.190 "nvme_admin": false, 00:13:26.190 "nvme_io": false, 00:13:26.190 "nvme_io_md": false, 00:13:26.190 "write_zeroes": true, 00:13:26.190 "zcopy": false, 00:13:26.190 "get_zone_info": false, 00:13:26.190 "zone_management": false, 00:13:26.190 "zone_append": false, 00:13:26.190 "compare": true, 00:13:26.190 "compare_and_write": false, 00:13:26.190 "abort": true, 00:13:26.190 "seek_hole": false, 00:13:26.190 "seek_data": false, 00:13:26.190 "copy": true, 00:13:26.190 "nvme_iov_md": false 00:13:26.190 }, 00:13:26.190 "driver_specific": { 00:13:26.190 "gpt": { 00:13:26.190 "base_bdev": "Nvme1n1", 00:13:26.190 "offset_blocks": 655360, 00:13:26.190 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:13:26.190 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:26.190 "partition_name": "SPDK_TEST_second" 00:13:26.190 } 00:13:26.190 } 00:13:26.190 } 00:13:26.190 ]' 00:13:26.190 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:13:26.190 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:13:26.190 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:13:26.190 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:26.190 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63837 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 63837 ']' 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 63837 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63837 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63837' 00:13:26.449 killing process with pid 63837 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 63837 00:13:26.449 13:39:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 63837 00:13:28.979 00:13:28.979 real 0m4.748s 00:13:28.979 user 0m4.968s 00:13:28.979 sys 0m0.601s 00:13:28.979 13:39:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:28.979 13:39:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:28.979 ************************************ 00:13:28.979 END TEST bdev_gpt_uuid 00:13:28.979 ************************************ 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:13:28.979 13:39:22 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:29.547 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:29.547 Waiting for block devices as requested 00:13:29.849 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.849 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:13:29.849 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:30.125 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:35.395 * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing 00:13:35.395 13:39:28 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:13:35.395 13:39:28 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:13:35.395 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:35.395 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:35.395 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:35.395 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:35.395 13:39:29 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:13:35.395 00:13:35.395 real 1m8.771s 00:13:35.395 user 1m27.133s 00:13:35.395 sys 0m12.432s 00:13:35.395 13:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:35.395 ************************************ 00:13:35.395 END TEST blockdev_nvme_gpt 00:13:35.395 13:39:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:35.395 ************************************ 00:13:35.395 13:39:29 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:35.395 13:39:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:35.395 13:39:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:35.395 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:13:35.395 ************************************ 00:13:35.395 START TEST nvme 00:13:35.395 ************************************ 00:13:35.395 13:39:29 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:35.395 * Looking for test storage... 00:13:35.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:35.395 13:39:29 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:35.395 13:39:29 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:35.395 13:39:29 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:35.654 13:39:29 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:35.654 13:39:29 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.654 13:39:29 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.654 13:39:29 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.654 13:39:29 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.654 13:39:29 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.654 13:39:29 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.654 13:39:29 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.654 13:39:29 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.654 13:39:29 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.654 13:39:29 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.654 13:39:29 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.654 13:39:29 nvme -- scripts/common.sh@344 -- # case "$op" in 00:13:35.654 13:39:29 nvme -- scripts/common.sh@345 -- # : 1 00:13:35.654 13:39:29 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.654 13:39:29 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:13:35.654 13:39:29 nvme -- scripts/common.sh@365 -- # decimal 1 00:13:35.654 13:39:29 nvme -- scripts/common.sh@353 -- # local d=1 00:13:35.654 13:39:29 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.654 13:39:29 nvme -- scripts/common.sh@355 -- # echo 1 00:13:35.654 13:39:29 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.654 13:39:29 nvme -- scripts/common.sh@366 -- # decimal 2 00:13:35.654 13:39:29 nvme -- scripts/common.sh@353 -- # local d=2 00:13:35.654 13:39:29 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.654 13:39:29 nvme -- scripts/common.sh@355 -- # echo 2 00:13:35.654 13:39:29 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.654 13:39:29 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.654 13:39:29 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.654 13:39:29 nvme -- scripts/common.sh@368 -- # return 0 00:13:35.654 13:39:29 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.654 13:39:29 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:35.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.654 --rc genhtml_branch_coverage=1 00:13:35.654 --rc genhtml_function_coverage=1 00:13:35.654 --rc genhtml_legend=1 00:13:35.654 --rc geninfo_all_blocks=1 00:13:35.654 --rc geninfo_unexecuted_blocks=1 00:13:35.654 00:13:35.654 ' 00:13:35.654 13:39:29 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:35.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.654 --rc genhtml_branch_coverage=1 00:13:35.654 --rc genhtml_function_coverage=1 00:13:35.654 --rc genhtml_legend=1 00:13:35.654 --rc geninfo_all_blocks=1 00:13:35.654 --rc geninfo_unexecuted_blocks=1 00:13:35.654 00:13:35.654 ' 00:13:35.654 13:39:29 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:35.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.654 --rc genhtml_branch_coverage=1 00:13:35.654 --rc genhtml_function_coverage=1 00:13:35.654 --rc genhtml_legend=1 00:13:35.654 --rc geninfo_all_blocks=1 00:13:35.654 --rc geninfo_unexecuted_blocks=1 00:13:35.654 00:13:35.654 ' 00:13:35.654 13:39:29 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:35.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.654 --rc genhtml_branch_coverage=1 00:13:35.654 --rc genhtml_function_coverage=1 00:13:35.654 --rc genhtml_legend=1 00:13:35.654 --rc geninfo_all_blocks=1 00:13:35.654 --rc geninfo_unexecuted_blocks=1 00:13:35.654 00:13:35.654 ' 00:13:35.654 13:39:29 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:36.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:36.848 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.848 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.848 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.848 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:37.108 13:39:30 nvme -- nvme/nvme.sh@79 -- # uname 00:13:37.108 13:39:30 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:13:37.108 13:39:30 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:13:37.108 13:39:30 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:13:37.108 13:39:30 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:13:37.108 13:39:30 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:13:37.108 13:39:30 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:13:37.108 13:39:30 nvme -- common/autotest_common.sh@1073 -- # stubpid=64501 00:13:37.108 13:39:30 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:13:37.108 Waiting for stub to be ready for secondary processes... 13:39:30 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to be ready for secondary processes... 00:13:37.108 13:39:30 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:37.108 13:39:30 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64501 ]] 00:13:37.108 13:39:30 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:13:37.108 [2024-11-06 13:39:30.964132] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:13:37.108 [2024-11-06 13:39:30.964333] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:13:38.047 13:39:31 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:38.047 13:39:31 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64501 ]] 00:13:38.047 13:39:31 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:13:38.305 [2024-11-06 13:39:32.065634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.306 [2024-11-06 13:39:32.231912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.306 [2024-11-06 13:39:32.232096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.306 [2024-11-06 13:39:32.232229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.306 [2024-11-06 13:39:32.257306] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:13:38.306 [2024-11-06 13:39:32.257354] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:38.306 [2024-11-06 13:39:32.269556] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:38.306 [2024-11-06 13:39:32.269720] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:38.306 [2024-11-06 13:39:32.273538] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:38.306 [2024-11-06 13:39:32.273792] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:13:38.306 [2024-11-06 13:39:32.273892] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:13:38.306 [2024-11-06 13:39:32.277943] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:38.306 [2024-11-06 13:39:32.278197] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:13:38.306 [2024-11-06 13:39:32.278298] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:13:38.306 [2024-11-06 13:39:32.282563] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:38.306 [2024-11-06 13:39:32.282941] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:13:38.306 [2024-11-06 13:39:32.283063] nvme_cuse.c:
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:13:38.306 [2024-11-06 13:39:32.283138] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:13:38.306 [2024-11-06 13:39:32.283228] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:13:39.243 13:39:32 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:39.243 done. 00:13:39.243 13:39:32 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:13:39.243 13:39:32 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:39.243 13:39:32 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:13:39.243 13:39:32 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:39.243 13:39:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.243 ************************************ 00:13:39.243 START TEST nvme_reset 00:13:39.243 ************************************ 00:13:39.243 13:39:32 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:39.502 Initializing NVMe Controllers 00:13:39.502 Skipping QEMU NVMe SSD at 0000:00:13.0 00:13:39.502 Skipping QEMU NVMe SSD at 0000:00:10.0 00:13:39.502 Skipping QEMU NVMe SSD at 0000:00:11.0 00:13:39.502 Skipping QEMU NVMe SSD at 0000:00:12.0 00:13:39.502 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:13:39.502 00:13:39.502 real 0m0.397s 00:13:39.502 user 0m0.148s 00:13:39.502 sys 0m0.203s 00:13:39.502 13:39:33 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:39.502 13:39:33 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:13:39.502 ************************************ 00:13:39.502 END TEST nvme_reset 00:13:39.502 ************************************ 00:13:39.502 13:39:33 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:13:39.502 13:39:33 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:39.502 13:39:33 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:39.502 13:39:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.502 ************************************ 00:13:39.502 START TEST nvme_identify 00:13:39.502 ************************************ 00:13:39.502 13:39:33 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:13:39.502 13:39:33 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:13:39.502 13:39:33 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:13:39.502 13:39:33 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:13:39.502 13:39:33 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:13:39.502 13:39:33 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:39.503 13:39:33 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:13:39.503 13:39:33 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:39.503 13:39:33 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:39.503 13:39:33 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:39.503 13:39:33 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:13:39.503 13:39:33 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:39.503 13:39:33 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:13:40.072 [2024-11-06 13:39:33.751005] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64535 terminated unexpected 00:13:40.072 ===================================================== 00:13:40.072 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:40.072 ===================================================== 00:13:40.072 Controller Capabilities/Features 00:13:40.072 ================================ 00:13:40.072 Vendor ID: 1b36 00:13:40.072 Subsystem Vendor ID: 1af4 00:13:40.072 Serial Number: 12343 00:13:40.072 Model Number: QEMU NVMe Ctrl 00:13:40.072 Firmware Version: 8.0.0 00:13:40.072 Recommended Arb Burst: 6 00:13:40.072 IEEE OUI Identifier: 00 54 52 00:13:40.072 Multi-path I/O 00:13:40.072 May have multiple subsystem ports: No 00:13:40.072 May have multiple controllers: Yes 00:13:40.072 Associated with SR-IOV VF: No 00:13:40.072 Max Data Transfer Size: 524288 00:13:40.072 Max Number of Namespaces: 256 00:13:40.072 Max Number of I/O Queues: 64 00:13:40.072 NVMe Specification Version (VS): 1.4 00:13:40.072 NVMe Specification Version (Identify): 1.4 00:13:40.072 Maximum Queue Entries: 2048 00:13:40.072 Contiguous Queues Required: Yes 00:13:40.072 Arbitration Mechanisms Supported 00:13:40.072 Weighted Round Robin: Not Supported 00:13:40.072 Vendor Specific: Not Supported 00:13:40.072 Reset Timeout: 7500 ms 00:13:40.072 Doorbell Stride: 4 bytes 00:13:40.072 NVM Subsystem Reset: Not Supported 00:13:40.072 Command Sets Supported 00:13:40.072 NVM Command Set: Supported 00:13:40.072 Boot Partition: Not Supported 00:13:40.072 Memory Page Size Minimum: 4096 bytes 00:13:40.072 Memory Page Size Maximum: 65536 bytes 00:13:40.072 Persistent Memory Region: Not Supported 00:13:40.072 Optional Asynchronous Events Supported 00:13:40.072 Namespace Attribute Notices: Supported 00:13:40.072 Firmware Activation Notices: Not Supported 00:13:40.072 ANA Change Notices: Not Supported 00:13:40.072 PLE Aggregate Log Change Notices: Not Supported 00:13:40.072 LBA Status Info Alert Notices: Not Supported 00:13:40.072 EGE Aggregate Log Change Notices: Not Supported 00:13:40.072 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.072 Zone Descriptor Change Notices: Not Supported 00:13:40.072 Discovery Log Change Notices: Not Supported 00:13:40.072 Controller Attributes 00:13:40.072 128-bit Host Identifier: Not Supported 00:13:40.072 Non-Operational Permissive Mode: Not Supported 00:13:40.072 NVM Sets: Not Supported 00:13:40.072 Read Recovery Levels: Not Supported 00:13:40.072 Endurance Groups: Supported 00:13:40.072 Predictable Latency Mode: Not Supported 00:13:40.072 Traffic Based Keep Alive: Not Supported 00:13:40.072 Namespace Granularity: Not Supported 00:13:40.072 SQ Associations: Not Supported 00:13:40.072 UUID List: Not Supported 00:13:40.072 Multi-Domain Subsystem: Not Supported 00:13:40.072 Fixed Capacity Management: Not Supported 00:13:40.072 Variable Capacity Management: Not Supported 00:13:40.072 Delete Endurance Group: Not Supported 00:13:40.072 Delete NVM Set: Not Supported 00:13:40.072 Extended LBA Formats Supported: Supported 00:13:40.073 Flexible Data Placement Supported: Supported 00:13:40.073 00:13:40.073 Controller Memory Buffer Support 00:13:40.073 ================================ 00:13:40.073 Supported: No 00:13:40.073
00:13:40.073 Persistent Memory Region Support 00:13:40.073 ================================ 00:13:40.073 Supported: No 00:13:40.073 00:13:40.073 Admin Command Set Attributes 00:13:40.073 ============================ 00:13:40.073 Security Send/Receive: Not Supported 00:13:40.073 Format NVM: Supported 00:13:40.073 Firmware Activate/Download: Not Supported 00:13:40.073 Namespace Management: Supported 00:13:40.073 Device Self-Test: Not Supported 00:13:40.073 Directives: Supported 00:13:40.073 NVMe-MI: Not Supported 00:13:40.073 Virtualization Management: Not Supported 00:13:40.073 Doorbell Buffer Config: Supported 00:13:40.073 Get LBA Status Capability: Not Supported 00:13:40.073 Command & Feature Lockdown Capability: Not Supported 00:13:40.073 Abort Command Limit: 4 00:13:40.073 Async Event Request Limit: 4 00:13:40.073 Number of Firmware Slots: N/A 00:13:40.073 Firmware Slot 1 Read-Only: N/A 00:13:40.073 Firmware Activation Without Reset: N/A 00:13:40.073 Multiple Update Detection Support: N/A 00:13:40.073 Firmware Update Granularity: No Information Provided 00:13:40.073 Per-Namespace SMART Log: Yes 00:13:40.073 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.073 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:40.073 Command Effects Log Page: Supported 00:13:40.073 Get Log Page Extended Data: Supported 00:13:40.073 Telemetry Log Pages: Not Supported 00:13:40.073 Persistent Event Log Pages: Not Supported 00:13:40.073 Supported Log Pages Log Page: May Support 00:13:40.073 Commands Supported & Effects Log Page: Not Supported 00:13:40.073 Feature Identifiers & Effects Log Page: May Support 00:13:40.073 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.073 Data Area 4 for Telemetry Log: Not Supported 00:13:40.073 Error Log Page Entries Supported: 1 00:13:40.073 Keep Alive: Not Supported 00:13:40.073 00:13:40.073 NVM Command Set Attributes 00:13:40.073 ========================== 00:13:40.073 Submission Queue Entry Size 00:13:40.073 Max: 64 00:13:40.073 Min: 64 00:13:40.073 Completion Queue Entry Size 00:13:40.073 Max: 16 00:13:40.073 Min: 16 00:13:40.073 Number of Namespaces: 256 00:13:40.073 Compare Command: Supported 00:13:40.073 Write Uncorrectable Command: Not Supported 00:13:40.073 Dataset Management Command: Supported 00:13:40.073 Write Zeroes Command: Supported 00:13:40.073 Set Features Save Field: Supported 00:13:40.073 Reservations: Not Supported 00:13:40.073 Timestamp: Supported 00:13:40.073 Copy: Supported 00:13:40.073 Volatile Write Cache: Present 00:13:40.073 Atomic Write Unit (Normal): 1 00:13:40.073 Atomic Write Unit (PFail): 1 00:13:40.073 Atomic Compare & Write Unit: 1 00:13:40.073 Fused Compare & Write: Not Supported 00:13:40.073 Scatter-Gather List 00:13:40.073 SGL Command Set: Supported 00:13:40.073 SGL Keyed: Not Supported 00:13:40.073 SGL Bit Bucket Descriptor: Not Supported 00:13:40.073 SGL Metadata Pointer: Not Supported 00:13:40.073 Oversized SGL: Not Supported 00:13:40.073 SGL Metadata Address: Not Supported 00:13:40.073 SGL Offset: Not Supported 00:13:40.073 Transport SGL Data Block: Not Supported 00:13:40.073 Replay Protected Memory Block: Not Supported 00:13:40.073 00:13:40.073 Firmware Slot Information 00:13:40.073 ========================= 00:13:40.073 Active slot: 1 00:13:40.073 Slot 1 Firmware Revision: 1.0 00:13:40.073 00:13:40.073 00:13:40.073 Commands Supported and Effects 00:13:40.073 ============================== 00:13:40.073 Admin Commands 00:13:40.073 -------------- 00:13:40.073 Delete I/O Submission Queue (00h): Supported
00:13:40.073 Create I/O Submission Queue (01h): Supported 00:13:40.073 Get Log Page (02h): Supported 00:13:40.073 Delete I/O Completion Queue (04h): Supported 00:13:40.073 Create I/O Completion Queue (05h): Supported 00:13:40.073 Identify (06h): Supported 00:13:40.073 Abort (08h): Supported 00:13:40.073 Set Features (09h): Supported 00:13:40.073 Get Features (0Ah): Supported 00:13:40.073 Asynchronous Event Request (0Ch): Supported 00:13:40.073 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:40.073 Directive Send (19h): Supported 00:13:40.073 Directive Receive (1Ah): Supported 00:13:40.073 Virtualization Management (1Ch): Supported 00:13:40.073 Doorbell Buffer Config (7Ch): Supported 00:13:40.073 Format NVM (80h): Supported LBA-Change 00:13:40.073 I/O Commands 00:13:40.073 ------------ 00:13:40.073 Flush (00h): Supported LBA-Change 00:13:40.073 Write (01h): Supported LBA-Change 00:13:40.073 Read (02h): Supported 00:13:40.073 Compare (05h): Supported 00:13:40.073 Write Zeroes (08h): Supported LBA-Change 00:13:40.073 Dataset Management (09h): Supported LBA-Change 00:13:40.073 Unknown (0Ch): Supported 00:13:40.074 Unknown (12h): Supported 00:13:40.074 Copy (19h): Supported LBA-Change 00:13:40.074 Unknown (1Dh): Supported LBA-Change 00:13:40.074 00:13:40.074 Error Log 00:13:40.074 ========= 00:13:40.074 00:13:40.074 Arbitration 00:13:40.074 =========== 00:13:40.074 Arbitration Burst: no limit 00:13:40.074 00:13:40.074 Power Management 00:13:40.074 ================ 00:13:40.074 Number of Power States: 1 00:13:40.074 Current Power State: Power State #0 00:13:40.074 Power State #0: 00:13:40.074 Max Power: 25.00 W 00:13:40.074 Non-Operational State: Operational 00:13:40.074 Entry Latency: 16 microseconds 00:13:40.074 Exit Latency: 4 microseconds 00:13:40.074 Relative Read Throughput: 0 00:13:40.074 Relative Read Latency: 0 00:13:40.074 Relative Write Throughput: 0 00:13:40.074 Relative Write Latency: 0 00:13:40.074 Idle Power: Not Reported 00:13:40.074 Active Power: Not Reported 00:13:40.074 Non-Operational Permissive Mode: Not Supported 00:13:40.074 00:13:40.074 Health Information 00:13:40.074 ================== 00:13:40.074 Critical Warnings: 00:13:40.074 Available Spare Space: OK 00:13:40.074 Temperature: OK 00:13:40.074 Device Reliability: OK 00:13:40.074 Read Only: No 00:13:40.074 Volatile Memory Backup: OK 00:13:40.074 Current Temperature: 323 Kelvin (50 Celsius) 00:13:40.074 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:40.074 Available Spare: 0% 00:13:40.074 Available Spare Threshold: 0% 00:13:40.074 Life Percentage Used: 0% 00:13:40.074 Data Units Read: 699 00:13:40.074 Data Units Written: 628 00:13:40.074 Host Read Commands: 32960 00:13:40.074 Host Write Commands: 32383 00:13:40.074 Controller Busy Time: 0 minutes 00:13:40.074 Power Cycles: 0 00:13:40.074 Power On Hours: 0 hours 00:13:40.074 Unsafe Shutdowns: 0 00:13:40.074 Unrecoverable Media Errors: 0 00:13:40.074 Lifetime Error Log Entries: 0 00:13:40.074 Warning Temperature Time: 0 minutes 00:13:40.074 Critical Temperature Time: 0 minutes 00:13:40.074 00:13:40.074 Number of Queues 00:13:40.074 ================ 00:13:40.074 Number of I/O Submission Queues: 64 00:13:40.074 Number of I/O Completion Queues: 64 00:13:40.074 00:13:40.074 ZNS Specific Controller Data 00:13:40.074 ============================ 00:13:40.074 Zone Append Size Limit: 0 00:13:40.074 00:13:40.074 00:13:40.074 Active Namespaces 00:13:40.074 ================= 00:13:40.074 Namespace ID:1 00:13:40.074 Error Recovery Timeout: Unlimited 00:13:40.074 
Command Set Identifier: NVM (00h) 00:13:40.074 Deallocate: Supported 00:13:40.074 Deallocated/Unwritten Error: Supported 00:13:40.074 Deallocated Read Value: All 0x00 00:13:40.074 Deallocate in Write Zeroes: Not Supported 00:13:40.074 Deallocated Guard Field: 0xFFFF 00:13:40.074 Flush: Supported 00:13:40.074 Reservation: Not Supported 00:13:40.074 Namespace Sharing Capabilities: Multiple Controllers 00:13:40.074 Size (in LBAs): 262144 (1GiB) 00:13:40.074 Capacity (in LBAs): 262144 (1GiB) 00:13:40.074 Utilization (in LBAs): 262144 (1GiB) 00:13:40.074 Thin Provisioning: Not Supported 00:13:40.074 Per-NS Atomic Units: No 00:13:40.074 Maximum Single Source Range Length: 128 00:13:40.074 Maximum Copy Length: 128 00:13:40.074 Maximum Source Range Count: 128 00:13:40.074 NGUID/EUI64 Never Reused: No 00:13:40.074 Namespace Write Protected: No 00:13:40.074 Endurance group ID: 1 00:13:40.074 Number of LBA Formats: 8 00:13:40.074 Current LBA Format: LBA Format #04 00:13:40.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.074 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.074 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.074 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.074 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.074 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.074 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.074 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.074 00:13:40.074 Get Feature FDP: 00:13:40.074 ================ 00:13:40.074 Enabled: Yes 00:13:40.074 FDP configuration index: 0 00:13:40.074 00:13:40.074 FDP configurations log page 00:13:40.074 =========================== 00:13:40.074 Number of FDP configurations: 1 00:13:40.074 Version: 0 00:13:40.074 Size: 112 00:13:40.074 FDP Configuration Descriptor: 0 00:13:40.074 Descriptor Size: 96 00:13:40.074 Reclaim Group Identifier format: 2 00:13:40.074 FDP Volatile Write Cache: Not Present 00:13:40.074 FDP Configuration: Valid 00:13:40.074 Vendor Specific Size: 0 00:13:40.074 Number of Reclaim Groups: 2 00:13:40.074 Number of Reclaim Unit Handles: 8 00:13:40.074 Max Placement Identifiers: 128 00:13:40.074 Number of Namespaces Supported: 256 00:13:40.074 Reclaim Unit Nominal Size: 6000000 bytes 00:13:40.074 Estimated Reclaim Unit Time Limit: Not Reported 00:13:40.074 RUH Desc #000: RUH Type: Initially Isolated 00:13:40.074 RUH Desc #001: RUH Type: Initially Isolated 00:13:40.074 RUH Desc #002: RUH Type: Initially Isolated 00:13:40.074 RUH Desc #003: RUH Type: Initially Isolated 00:13:40.074 RUH Desc #004: RUH Type: Initially Isolated 00:13:40.074 RUH Desc #005: RUH Type: Initially Isolated 00:13:40.074 RUH Desc #006: RUH Type: Initially Isolated 00:13:40.074 RUH Desc #007: RUH Type: Initially Isolated 00:13:40.074 00:13:40.074 FDP reclaim unit handle usage log page 00:13:40.074 ==================================[2024-11-06 13:39:33.753529] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64535 terminated unexpected 00:13:40.074 ==== 00:13:40.074 Number of Reclaim Unit Handles: 8 00:13:40.074 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:40.074 RUH Usage Desc #001: RUH Attributes: Unused 00:13:40.074 RUH Usage Desc #002: RUH Attributes: Unused 00:13:40.074 RUH Usage Desc #003: RUH Attributes: Unused 00:13:40.074 RUH Usage Desc #004: RUH Attributes: Unused 00:13:40.074 RUH Usage Desc #005: RUH Attributes: Unused 00:13:40.074 RUH Usage Desc #006: RUH Attributes: Unused 00:13:40.074 RUH Usage Desc
#007: RUH Attributes: Unused 00:13:40.074 00:13:40.074 FDP statistics log page 00:13:40.074 ======================= 00:13:40.074 Host bytes with metadata written: 400596992 00:13:40.074 Media bytes with metadata written: 400637952 00:13:40.074 Media bytes erased: 0 00:13:40.074 00:13:40.074 FDP events log page 00:13:40.074 =================== 00:13:40.074 Number of FDP events: 0 00:13:40.074 00:13:40.074 NVM Specific Namespace Data 00:13:40.074 =========================== 00:13:40.074 Logical Block Storage Tag Mask: 0 00:13:40.074 Protection Information Capabilities: 00:13:40.074 16b Guard Protection Information Storage Tag Support: No 00:13:40.074 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.074 Storage Tag Check Read Support: No 00:13:40.074 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.074 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.074 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.074 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.074 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.075 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.075 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.075 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.075 ===================================================== 00:13:40.075 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:40.075 ===================================================== 00:13:40.075 Controller Capabilities/Features 00:13:40.075 ================================ 00:13:40.075 Vendor ID: 1b36 00:13:40.075 Subsystem Vendor ID: 1af4 00:13:40.075 Serial Number: 12340 00:13:40.075 Model Number: QEMU NVMe Ctrl 00:13:40.075 Firmware Version: 8.0.0 00:13:40.075 Recommended Arb Burst: 6 00:13:40.075 IEEE OUI Identifier: 00 54 52 00:13:40.075 Multi-path I/O 00:13:40.075 May have multiple subsystem ports: No 00:13:40.075 May have multiple controllers: No 00:13:40.075 Associated with SR-IOV VF: No 00:13:40.075 Max Data Transfer Size: 524288 00:13:40.075 Max Number of Namespaces: 256 00:13:40.075 Max Number of I/O Queues: 64 00:13:40.075 NVMe Specification Version (VS): 1.4 00:13:40.075 NVMe Specification Version (Identify): 1.4 00:13:40.075 Maximum Queue Entries: 2048 00:13:40.075 Contiguous Queues Required: Yes 00:13:40.075 Arbitration Mechanisms Supported 00:13:40.075 Weighted Round Robin: Not Supported 00:13:40.075 Vendor Specific: Not Supported 00:13:40.075 Reset Timeout: 7500 ms 00:13:40.075 Doorbell Stride: 4 bytes 00:13:40.075 NVM Subsystem Reset: Not Supported 00:13:40.075 Command Sets Supported 00:13:40.075 NVM Command Set: Supported 00:13:40.075 Boot Partition: Not Supported 00:13:40.075 Memory Page Size Minimum: 4096 bytes 00:13:40.075 Memory Page Size Maximum: 65536 bytes 00:13:40.075 Persistent Memory Region: Not Supported 00:13:40.075 Optional Asynchronous Events Supported 00:13:40.075 Namespace Attribute Notices: Supported 00:13:40.075 Firmware Activation Notices: Not Supported 00:13:40.075 ANA Change Notices: Not Supported 00:13:40.075 PLE Aggregate Log Change Notices: Not Supported 00:13:40.075 LBA Status Info Alert Notices: Not Supported 00:13:40.075 EGE Aggregate Log Change 
Notices: Not Supported 00:13:40.075 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.075 Zone Descriptor Change Notices: Not Supported 00:13:40.075 Discovery Log Change Notices: Not Supported 00:13:40.075 Controller Attributes 00:13:40.075 128-bit Host Identifier: Not Supported 00:13:40.075 Non-Operational Permissive Mode: Not Supported 00:13:40.075 NVM Sets: Not Supported 00:13:40.075 Read Recovery Levels: Not Supported 00:13:40.075 Endurance Groups: Not Supported 00:13:40.075 Predictable Latency Mode: Not Supported 00:13:40.075 Traffic Based Keep Alive: Not Supported 00:13:40.075 Namespace Granularity: Not Supported 00:13:40.075 SQ Associations: Not Supported 00:13:40.075 UUID List: Not Supported 00:13:40.075 Multi-Domain Subsystem: Not Supported 00:13:40.075 Fixed Capacity Management: Not Supported 00:13:40.075 Variable Capacity Management: Not Supported 00:13:40.075 Delete Endurance Group: Not Supported 00:13:40.075 Delete NVM Set: Not Supported 00:13:40.075 Extended LBA Formats Supported: Supported 00:13:40.075 Flexible Data Placement Supported: Not Supported 00:13:40.075 00:13:40.075 Controller Memory Buffer Support 00:13:40.075 ================================ 00:13:40.075 Supported: No 00:13:40.075 00:13:40.075 Persistent Memory Region Support 00:13:40.075 ================================ 00:13:40.075 Supported: No 00:13:40.075 00:13:40.075 Admin Command Set Attributes 00:13:40.075 ============================ 00:13:40.075 Security Send/Receive: Not Supported 00:13:40.075 Format NVM: Supported 00:13:40.075 Firmware Activate/Download: Not Supported 00:13:40.075 Namespace Management: Supported 00:13:40.075 Device Self-Test: Not Supported 00:13:40.075 Directives: Supported 00:13:40.075 NVMe-MI: Not Supported 00:13:40.075 Virtualization Management: Not Supported 00:13:40.075 Doorbell Buffer Config: Supported 00:13:40.075 Get LBA Status Capability: Not Supported 00:13:40.075 Command & Feature Lockdown Capability: Not Supported 00:13:40.075 Abort Command Limit: 4 00:13:40.075 Async Event Request Limit: 4 00:13:40.075 Number of Firmware Slots: N/A 00:13:40.075 Firmware Slot 1 Read-Only: N/A 00:13:40.075 Firmware Activation Without Reset: N/A 00:13:40.075 Multiple Update Detection Support: N/A 00:13:40.075 Firmware Update Granularity: No Information Provided 00:13:40.075 Per-Namespace SMART Log: Yes 00:13:40.075 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.075 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:40.075 Command Effects Log Page: Supported 00:13:40.075 Get Log Page Extended Data: Supported 00:13:40.075 Telemetry Log Pages: Not Supported 00:13:40.075 Persistent Event Log Pages: Not Supported 00:13:40.075 Supported Log Pages Log Page: May Support 00:13:40.075 Commands Supported & Effects Log Page: Not Supported 00:13:40.075 Feature Identifiers & Effects Log Page: May Support 00:13:40.075 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.075 Data Area 4 for Telemetry Log: Not Supported 00:13:40.075 Error Log Page Entries Supported: 1 00:13:40.075 Keep Alive: Not Supported 00:13:40.075 00:13:40.075 NVM Command Set Attributes 00:13:40.075 ========================== 00:13:40.075 Submission Queue Entry Size 00:13:40.075 Max: 64 00:13:40.075 Min: 64 00:13:40.075 Completion Queue Entry Size 00:13:40.075 Max: 16 00:13:40.075 Min: 16 00:13:40.075 Number of Namespaces: 256 00:13:40.075 Compare Command: Supported 00:13:40.075 Write Uncorrectable Command: Not Supported 00:13:40.075 Dataset Management Command: Supported 00:13:40.075 Write Zeroes Command:
Supported 00:13:40.075 Set Features Save Field: Supported 00:13:40.075 Reservations: Not Supported 00:13:40.076 Timestamp: Supported 00:13:40.076 Copy: Supported 00:13:40.076 Volatile Write Cache: Present 00:13:40.076 Atomic Write Unit (Normal): 1 00:13:40.076 Atomic Write Unit (PFail): 1 00:13:40.076 Atomic Compare & Write Unit: 1 00:13:40.076 Fused Compare & Write: Not Supported 00:13:40.076 Scatter-Gather List 00:13:40.076 SGL Command Set: Supported 00:13:40.076 SGL Keyed: Not Supported 00:13:40.076 SGL Bit Bucket Descriptor: Not Supported 00:13:40.076 SGL Metadata Pointer: Not Supported 00:13:40.076 Oversized SGL: Not Supported 00:13:40.076 SGL Metadata Address: Not Supported 00:13:40.076 SGL Offset: Not Supported 00:13:40.076 Transport SGL Data Block: Not Supported 00:13:40.076 Replay Protected Memory Block: Not Supported 00:13:40.076 00:13:40.076 Firmware Slot Information 00:13:40.076 ========================= 00:13:40.076 Active slot: 1 00:13:40.076 Slot 1 Firmware Revision: 1.0 00:13:40.076 00:13:40.076 00:13:40.076 Commands Supported and Effects 00:13:40.076 ============================== 00:13:40.076 Admin Commands 00:13:40.076 -------------- 00:13:40.076 Delete I/O Submission Queue (00h): Supported 00:13:40.076 Create I/O Submission Queue (01h): Supported 00:13:40.076 Get Log Page (02h): Supported 00:13:40.076 Delete I/O Completion Queue (04h): Supported 00:13:40.076 Create I/O Completion Queue (05h): Supported 00:13:40.076 Identify (06h): Supported 00:13:40.076 Abort (08h): Supported 00:13:40.076 Set Features (09h): Supported 00:13:40.076 Get Features (0Ah): Supported 00:13:40.076 Asynchronous Event Request (0Ch): Supported 00:13:40.076 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:40.076 Directive Send (19h): Supported 00:13:40.076 Directive Receive (1Ah): Supported 00:13:40.076 Virtualization Management (1Ch): Supported 00:13:40.076 Doorbell Buffer Config (7Ch): Supported 00:13:40.076 Format NVM (80h): Supported LBA-Change 00:13:40.076 I/O Commands 00:13:40.076 ------------ 00:13:40.076 Flush (00h): Supported LBA-Change 00:13:40.076 Write (01h): Supported LBA-Change 00:13:40.076 Read (02h): Supported 00:13:40.076 Compare (05h): Supported 00:13:40.076 Write Zeroes (08h): Supported LBA-Change 00:13:40.076 Dataset Management (09h): Supported LBA-Change 00:13:40.076 Unknown (0Ch): Supported 00:13:40.076 Unknown (12h): Supported 00:13:40.076 Copy (19h): Supported LBA-Change 00:13:40.076 Unknown (1Dh): Supported LBA-Change 00:13:40.076 00:13:40.076 Error Log 00:13:40.076 ========= 00:13:40.076 00:13:40.076 Arbitration 00:13:40.076 =========== 00:13:40.076 Arbitration Burst: no limit 00:13:40.076 00:13:40.076 Power Management 00:13:40.076 ================ 00:13:40.076 Number of Power States: 1 00:13:40.076 Current Power State: Power State #0 00:13:40.076 Power State #0: 00:13:40.076 Max Power: 25.00 W 00:13:40.076 Non-Operational State: Operational 00:13:40.076 Entry Latency: 16 microseconds 00:13:40.076 Exit Latency: 4 microseconds 00:13:40.076 Relative Read Throughput: 0 00:13:40.076 Relative Read Latency: 0 00:13:40.076 Relative Write Throughput: 0 00:13:40.076 Relative Write Latency: 0 00:13:40.076 Idle Power: Not Reported 00:13:40.076 Active Power: Not Reported 00:13:40.076 Non-Operational Permissive Mode: Not Supported 00:13:40.076 00:13:40.076 Health Information 00:13:40.076 ================== 00:13:40.076 Critical Warnings: 00:13:40.076 Available Spare Space: OK 00:13:40.076 Temperature: OK 00:13:40.076 Device Reliability: OK 00:13:40.076 Read Only: No 
00:13:40.076 Volatile Memory Backup: OK 00:13:40.076 Current Temperature: 323 Kelvin (50 Celsius) 00:13:40.076 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:40.076 Available Spare: 0% 00:13:40.076 Available Spare Threshold: 0% 00:13:40.076 Life Percentage Used: 0% 00:13:40.076 Data Units Read: 616 00:13:40.076 Data Units Written: 544 00:13:40.076 Host Read Commands: 32070 00:13:40.076 Host Write Commands: 31856 00:13:40.076 Controller Busy Time: 0 minutes 00:13:40.076 Power Cycles: 0 00:13:40.076 Power On Hours: 0 hours 00:13:40.076 Unsafe Shutdowns: 0 00:13:40.076 Unrecoverable Media Errors: 0 00:13:40.076 Lifetime Error Log Entries: 0 00:13:40.076 Warning Temperature Time: 0 minutes 00:13:40.076 Critical Temperature Time: 0 minutes 00:13:40.076 00:13:40.076 Number of Queues 00:13:40.076 ================ 00:13:40.076 Number of I/O Submission Queues: 64 00:13:40.076 Number of I/O Completion Queues: 64 00:13:40.076 00:13:40.076 ZNS Specific Controller Data 00:13:40.076 ============================ 00:13:40.076 Zone Append Size Limit: 0 00:13:40.076 00:13:40.076 00:13:40.076 Active Namespaces 00:13:40.076 ================= 00:13:40.076 Namespace ID:1 00:13:40.076 Error Recovery Timeout: Unlimited 00:13:40.076 Command Set Identifier: NVM (00h) 00:13:40.076 Deallocate: Supported 00:13:40.076 Deallocated/Unwritten Error: Supported 00:13:40.076 Deallocated Read Value: All 0x00 00:13:40.076 Deallocate in Write Zeroes: Not Supported 00:13:40.076 Deallocated Guard Field: 0xFFFF 00:13:40.076 Flush: Supported 00:13:40.076 Reservation: Not Supported 00:13:40.076 Metadata Transferred as: Separate Metadata Buffer 00:13:40.076 Namespace Sharing Capabilities: Private 00:13:40.076 Size (in LBAs): 1548666 (5GiB) 00:13:40.076 Capacity (in LBAs): 1548666 (5GiB) 00:13:40.076 Utilization (in LBAs): 1548666 (5GiB) 00:13:40.076 Thin Provisioning: Not Supported 00:13:40.076 Per-NS Atomic Units: No 00:13:40.076 Maximum Single Source Range Length: 128 00:13:40.076 Maximum Copy Length: 128 00:13:40.076 Maximum Source Range Count: 128 00:13:40.076 NGUID/EUI64 Never Reused: No 00:13:40.076 Namespace Write Protected: No 00:13:40.076 Number of LBA Formats: 8 00:13:40.076 Current LBA Format: [2024-11-06 13:39:33.754519] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64535 terminated unexpected 00:13:40.076 LBA Format #07 00:13:40.076 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.076 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.076 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.076 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.076 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.076 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.076 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.076 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.076 00:13:40.076 NVM Specific Namespace Data 00:13:40.076 =========================== 00:13:40.076 Logical Block Storage Tag Mask: 0 00:13:40.076 Protection Information Capabilities: 00:13:40.076 16b Guard Protection Information Storage Tag Support: No 00:13:40.076 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.076 Storage Tag Check Read Support: No 00:13:40.076 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.076 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.076 Extended LBA Format #02: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:13:40.076 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.076 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.076 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.076 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.077 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.077 ===================================================== 00:13:40.077 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:40.077 ===================================================== 00:13:40.077 Controller Capabilities/Features 00:13:40.077 ================================ 00:13:40.077 Vendor ID: 1b36 00:13:40.077 Subsystem Vendor ID: 1af4 00:13:40.077 Serial Number: 12341 00:13:40.077 Model Number: QEMU NVMe Ctrl 00:13:40.077 Firmware Version: 8.0.0 00:13:40.077 Recommended Arb Burst: 6 00:13:40.077 IEEE OUI Identifier: 00 54 52 00:13:40.077 Multi-path I/O 00:13:40.077 May have multiple subsystem ports: No 00:13:40.077 May have multiple controllers: No 00:13:40.077 Associated with SR-IOV VF: No 00:13:40.077 Max Data Transfer Size: 524288 00:13:40.077 Max Number of Namespaces: 256 00:13:40.077 Max Number of I/O Queues: 64 00:13:40.077 NVMe Specification Version (VS): 1.4 00:13:40.077 NVMe Specification Version (Identify): 1.4 00:13:40.077 Maximum Queue Entries: 2048 00:13:40.077 Contiguous Queues Required: Yes 00:13:40.077 Arbitration Mechanisms Supported 00:13:40.077 Weighted Round Robin: Not Supported 00:13:40.077 Vendor Specific: Not Supported 00:13:40.077 Reset Timeout: 7500 ms 00:13:40.077 Doorbell Stride: 4 bytes 00:13:40.077 NVM Subsystem Reset: Not Supported 00:13:40.077 Command Sets Supported 00:13:40.077 NVM Command Set: Supported 00:13:40.077 Boot Partition: Not Supported 00:13:40.077 Memory Page Size Minimum: 4096 bytes 00:13:40.077 Memory Page Size Maximum: 65536 bytes 00:13:40.077 Persistent Memory Region: Not Supported 00:13:40.077 Optional Asynchronous Events Supported 00:13:40.077 Namespace Attribute Notices: Supported 00:13:40.077 Firmware Activation Notices: Not Supported 00:13:40.077 ANA Change Notices: Not Supported 00:13:40.077 PLE Aggregate Log Change Notices: Not Supported 00:13:40.077 LBA Status Info Alert Notices: Not Supported 00:13:40.077 EGE Aggregate Log Change Notices: Not Supported 00:13:40.077 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.077 Zone Descriptor Change Notices: Not Supported 00:13:40.077 Discovery Log Change Notices: Not Supported 00:13:40.077 Controller Attributes 00:13:40.077 128-bit Host Identifier: Not Supported 00:13:40.077 Non-Operational Permissive Mode: Not Supported 00:13:40.077 NVM Sets: Not Supported 00:13:40.077 Read Recovery Levels: Not Supported 00:13:40.077 Endurance Groups: Not Supported 00:13:40.077 Predictable Latency Mode: Not Supported 00:13:40.077 Traffic Based Keep Alive: Not Supported 00:13:40.077 Namespace Granularity: Not Supported 00:13:40.077 SQ Associations: Not Supported 00:13:40.077 UUID List: Not Supported 00:13:40.077 Multi-Domain Subsystem: Not Supported 00:13:40.077 Fixed Capacity Management: Not Supported 00:13:40.077 Variable Capacity Management: Not Supported 00:13:40.077 Delete Endurance Group: Not Supported 00:13:40.077 Delete NVM Set: Not Supported 00:13:40.077 Extended LBA Formats Supported: Supported 00:13:40.077 Flexible Data Placement
Supported: Not Supported 00:13:40.077 00:13:40.077 Controller Memory Buffer Support 00:13:40.077 ================================ 00:13:40.077 Supported: No 00:13:40.077 00:13:40.077 Persistent Memory Region Support 00:13:40.077 ================================ 00:13:40.077 Supported: No 00:13:40.077 00:13:40.077 Admin Command Set Attributes 00:13:40.077 ============================ 00:13:40.077 Security Send/Receive: Not Supported 00:13:40.077 Format NVM: Supported 00:13:40.077 Firmware Activate/Download: Not Supported 00:13:40.077 Namespace Management: Supported 00:13:40.077 Device Self-Test: Not Supported 00:13:40.077 Directives: Supported 00:13:40.077 NVMe-MI: Not Supported 00:13:40.077 Virtualization Management: Not Supported 00:13:40.077 Doorbell Buffer Config: Supported 00:13:40.077 Get LBA Status Capability: Not Supported 00:13:40.077 Command & Feature Lockdown Capability: Not Supported 00:13:40.077 Abort Command Limit: 4 00:13:40.077 Async Event Request Limit: 4 00:13:40.077 Number of Firmware Slots: N/A 00:13:40.077 Firmware Slot 1 Read-Only: N/A 00:13:40.077 Firmware Activation Without Reset: N/A 00:13:40.077 Multiple Update Detection Support: N/A 00:13:40.077 Firmware Update Granularity: No Information Provided 00:13:40.077 Per-Namespace SMART Log: Yes 00:13:40.077 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.077 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:40.077 Command Effects Log Page: Supported 00:13:40.077 Get Log Page Extended Data: Supported 00:13:40.077 Telemetry Log Pages: Not Supported 00:13:40.077 Persistent Event Log Pages: Not Supported 00:13:40.077 Supported Log Pages Log Page: May Support 00:13:40.077 Commands Supported & Effects Log Page: Not Supported 00:13:40.077 Feature Identifiers & Effects Log Page: May Support 00:13:40.077 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.077 Data Area 4 for Telemetry Log: Not Supported 00:13:40.077 Error Log Page Entries Supported: 1 00:13:40.077 Keep Alive: Not Supported 00:13:40.077 00:13:40.077 NVM Command Set Attributes 00:13:40.077 ========================== 00:13:40.077 Submission Queue Entry Size 00:13:40.077 Max: 64 00:13:40.077 Min: 64 00:13:40.077 Completion Queue Entry Size 00:13:40.077 Max: 16 00:13:40.077 Min: 16 00:13:40.077 Number of Namespaces: 256 00:13:40.077 Compare Command: Supported 00:13:40.077 Write Uncorrectable Command: Not Supported 00:13:40.077 Dataset Management Command: Supported 00:13:40.077 Write Zeroes Command: Supported 00:13:40.077 Set Features Save Field: Supported 00:13:40.077 Reservations: Not Supported 00:13:40.077 Timestamp: Supported 00:13:40.077 Copy: Supported 00:13:40.077 Volatile Write Cache: Present 00:13:40.077 Atomic Write Unit (Normal): 1 00:13:40.077 Atomic Write Unit (PFail): 1 00:13:40.077 Atomic Compare & Write Unit: 1 00:13:40.077 Fused Compare & Write: Not Supported 00:13:40.077 Scatter-Gather List 00:13:40.077 SGL Command Set: Supported 00:13:40.077 SGL Keyed: Not Supported 00:13:40.077 SGL Bit Bucket Descriptor: Not Supported 00:13:40.077 SGL Metadata Pointer: Not Supported 00:13:40.077 Oversized SGL: Not Supported 00:13:40.077 SGL Metadata Address: Not Supported 00:13:40.077 SGL Offset: Not Supported 00:13:40.077 Transport SGL Data Block: Not Supported 00:13:40.077 Replay Protected Memory Block: Not Supported 00:13:40.077 00:13:40.077 Firmware Slot Information 00:13:40.077 ========================= 00:13:40.077 Active slot: 1 00:13:40.077 Slot 1 Firmware Revision: 1.0 00:13:40.077 00:13:40.077 00:13:40.077 Commands Supported and Effects
00:13:40.077 ============================== 00:13:40.077 Admin Commands 00:13:40.077 -------------- 00:13:40.077 Delete I/O Submission Queue (00h): Supported 00:13:40.077 Create I/O Submission Queue (01h): Supported 00:13:40.077 Get Log Page (02h): Supported 00:13:40.077 Delete I/O Completion Queue (04h): Supported 00:13:40.077 Create I/O Completion Queue (05h): Supported 00:13:40.077 Identify (06h): Supported 00:13:40.077 Abort (08h): Supported 00:13:40.077 Set Features (09h): Supported 00:13:40.077 Get Features (0Ah): Supported 00:13:40.077 Asynchronous Event Request (0Ch): Supported 00:13:40.077 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:40.077 Directive Send (19h): Supported 00:13:40.077 Directive Receive (1Ah): Supported 00:13:40.077 Virtualization Management (1Ch): Supported 00:13:40.077 Doorbell Buffer Config (7Ch): Supported 00:13:40.077 Format NVM (80h): Supported LBA-Change 00:13:40.077 I/O Commands 00:13:40.077 ------------ 00:13:40.077 Flush (00h): Supported LBA-Change 00:13:40.077 Write (01h): Supported LBA-Change 00:13:40.077 Read (02h): Supported 00:13:40.077 Compare (05h): Supported 00:13:40.077 Write Zeroes (08h): Supported LBA-Change 00:13:40.077 Dataset Management (09h): Supported LBA-Change 00:13:40.077 Unknown (0Ch): Supported 00:13:40.077 Unknown (12h): Supported 00:13:40.077 Copy (19h): Supported LBA-Change 00:13:40.077 Unknown (1Dh): Supported LBA-Change 00:13:40.077 00:13:40.077 Error Log 00:13:40.077 ========= 00:13:40.077 00:13:40.077 Arbitration 00:13:40.077 =========== 00:13:40.077 Arbitration Burst: no limit 00:13:40.077 00:13:40.077 Power Management 00:13:40.077 ================ 00:13:40.077 Number of Power States: 1 00:13:40.077 Current Power State: Power State #0 00:13:40.078 Power State #0: 00:13:40.078 Max Power: 25.00 W 00:13:40.078 Non-Operational State: Operational 00:13:40.078 Entry Latency: 16 microseconds 00:13:40.078 Exit Latency: 4 microseconds 00:13:40.078 Relative Read Throughput: 0 00:13:40.078 Relative Read Latency: 0 00:13:40.078 Relative Write Throughput: 0 00:13:40.078 Relative Write Latency: 0 00:13:40.078 Idle Power: Not Reported 00:13:40.078 Active Power: Not Reported 00:13:40.078 Non-Operational Permissive Mode: Not Supported 00:13:40.078 00:13:40.078 Health Information 00:13:40.078 ================== 00:13:40.078 Critical Warnings: 00:13:40.078 Available Spare Space: OK 00:13:40.078 Temperature: OK 00:13:40.078 Device Reliability: OK 00:13:40.078 Read Only: No 00:13:40.078 Volatile Memory Backup: OK 00:13:40.078 Current Temperature: 323 Kelvin (50 Celsius) 00:13:40.078 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:40.078 Available Spare: 0% 00:13:40.078 Available Spare Threshold: 0% 00:13:40.078 Life Percentage Used: 0% 00:13:40.078 Data Units Read: 975 00:13:40.078 Data Units Written: 843 00:13:40.078 Host Read Commands: 47933 00:13:40.078 Host Write Commands: 46734 00:13:40.078 Controller Busy Time: 0 minutes 00:13:40.078 Power Cycles: 0 00:13:40.078 Power On Hours: 0 hours 00:13:40.078 Unsafe Shutdowns: 0 00:13:40.078 Unrecoverable Media Errors: 0 00:13:40.078 Lifetime Error Log Entries: 0 00:13:40.078 Warning Temperature Time: 0 minutes 00:13:40.078 Critical Temperature Time: 0 minutes 00:13:40.078 00:13:40.078 Number of Queues 00:13:40.078 ================ 00:13:40.078 Number of I/O Submission Queues: 64 00:13:40.078 Number of I/O Completion Queues: 64 00:13:40.078 00:13:40.078 ZNS Specific Controller Data 00:13:40.078 ============================ 00:13:40.078 Zone Append Size Limit: 0 00:13:40.078 
00:13:40.078 00:13:40.078 Active Namespaces 00:13:40.078 ================= 00:13:40.078 Namespace ID:1 00:13:40.078 Error Recovery Timeout: Unlimited 00:13:40.078 Command Set Identifier: NVM (00h) 00:13:40.078 Deallocate: Supported 00:13:40.078 Deallocated/Unwritten Error: Supported 00:13:40.078 Deallocated Read Value: All 0x00 00:13:40.078 Deallocate in Write Zeroes: Not Supported 00:13:40.078 Deallocated Guard Field: 0xFFFF 00:13:40.078 Flush: Supported 00:13:40.078 Reservation: Not Supported 00:13:40.078 Namespace Sharing Capabilities: Private 00:13:40.078 Size (in LBAs): 1310720 (5GiB) 00:13:40.078 Capacity (in LBAs): 1310720 (5GiB) 00:13:40.078 Utilization (in LBAs): 1310720 (5GiB) 00:13:40.078 Thin Provisioning: Not Supported 00:13:40.078 Per-NS Atomic Units: No 00:13:40.078 Maximum Single Source Range Length: 128 00:13:40.078 Maximum Copy Length: 128 00:13:40.078 Maximum Source Range Count: 128 00:13:40.078 NGUID/EUI64 Never Reused: No 00:13:40.078 Namespace Write Protected: No 00:13:40.078 Number of LBA Formats: 8 00:13:40.078 Current LBA Format: LBA Format #04 00:13:40.078 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.078 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.078 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.078 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.078 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.078 LBA Format[2024-11-06 13:39:33.755559] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64535 terminated unexpected 00:13:40.078 #05: Data Size: 4096 Metadata Size: 8 00:13:40.078 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.078 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.078 00:13:40.078 NVM Specific Namespace Data 00:13:40.078 =========================== 00:13:40.078 Logical Block Storage Tag Mask: 0 00:13:40.078 Protection Information Capabilities: 00:13:40.078 16b Guard Protection Information Storage Tag Support: No 00:13:40.078 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.078 Storage Tag Check Read Support: No 00:13:40.078 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.078 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.078 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.078 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.078 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.078 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.078 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.078 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.078 ===================================================== 00:13:40.078 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:40.078 ===================================================== 00:13:40.078 Controller Capabilities/Features 00:13:40.078 ================================ 00:13:40.078 Vendor ID: 1b36 00:13:40.078 Subsystem Vendor ID: 1af4 00:13:40.078 Serial Number: 12342 00:13:40.078 Model Number: QEMU NVMe Ctrl 00:13:40.078 Firmware Version: 8.0.0 00:13:40.078 Recommended Arb Burst: 6 00:13:40.078 IEEE OUI Identifier: 00 54 52 00:13:40.078 Multi-path I/O 00:13:40.078 
May have multiple subsystem ports: No 00:13:40.078 May have multiple controllers: No 00:13:40.078 Associated with SR-IOV VF: No 00:13:40.078 Max Data Transfer Size: 524288 00:13:40.078 Max Number of Namespaces: 256 00:13:40.078 Max Number of I/O Queues: 64 00:13:40.078 NVMe Specification Version (VS): 1.4 00:13:40.078 NVMe Specification Version (Identify): 1.4 00:13:40.078 Maximum Queue Entries: 2048 00:13:40.078 Contiguous Queues Required: Yes 00:13:40.078 Arbitration Mechanisms Supported 00:13:40.078 Weighted Round Robin: Not Supported 00:13:40.078 Vendor Specific: Not Supported 00:13:40.078 Reset Timeout: 7500 ms 00:13:40.078 Doorbell Stride: 4 bytes 00:13:40.078 NVM Subsystem Reset: Not Supported 00:13:40.078 Command Sets Supported 00:13:40.078 NVM Command Set: Supported 00:13:40.078 Boot Partition: Not Supported 00:13:40.078 Memory Page Size Minimum: 4096 bytes 00:13:40.078 Memory Page Size Maximum: 65536 bytes 00:13:40.078 Persistent Memory Region: Not Supported 00:13:40.078 Optional Asynchronous Events Supported 00:13:40.078 Namespace Attribute Notices: Supported 00:13:40.078 Firmware Activation Notices: Not Supported 00:13:40.078 ANA Change Notices: Not Supported 00:13:40.078 PLE Aggregate Log Change Notices: Not Supported 00:13:40.078 LBA Status Info Alert Notices: Not Supported 00:13:40.078 EGE Aggregate Log Change Notices: Not Supported 00:13:40.078 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.078 Zone Descriptor Change Notices: Not Supported 00:13:40.078 Discovery Log Change Notices: Not Supported 00:13:40.078 Controller Attributes 00:13:40.078 128-bit Host Identifier: Not Supported 00:13:40.078 Non-Operational Permissive Mode: Not Supported 00:13:40.078 NVM Sets: Not Supported 00:13:40.078 Read Recovery Levels: Not Supported 00:13:40.078 Endurance Groups: Not Supported 00:13:40.078 Predictable Latency Mode: Not Supported 00:13:40.078 Traffic Based Keep Alive: Not Supported 00:13:40.078 Namespace Granularity: Not Supported 00:13:40.078 SQ Associations: Not Supported 00:13:40.078 UUID List: Not Supported 00:13:40.078 Multi-Domain Subsystem: Not Supported 00:13:40.079 Fixed Capacity Management: Not Supported 00:13:40.079 Variable Capacity Management: Not Supported 00:13:40.079 Delete Endurance Group: Not Supported 00:13:40.079 Delete NVM Set: Not Supported 00:13:40.079 Extended LBA Formats Supported: Supported 00:13:40.079 Flexible Data Placement Supported: Not Supported 00:13:40.079 00:13:40.079 Controller Memory Buffer Support 00:13:40.079 ================================ 00:13:40.079 Supported: No 00:13:40.079 00:13:40.079 Persistent Memory Region Support 00:13:40.079 ================================ 00:13:40.079 Supported: No 00:13:40.079 00:13:40.079 Admin Command Set Attributes 00:13:40.079 ============================ 00:13:40.079 Security Send/Receive: Not Supported 00:13:40.079 Format NVM: Supported 00:13:40.079 Firmware Activate/Download: Not Supported 00:13:40.079 Namespace Management: Supported 00:13:40.079 Device Self-Test: Not Supported 00:13:40.079 Directives: Supported 00:13:40.079 NVMe-MI: Not Supported 00:13:40.079 Virtualization Management: Not Supported 00:13:40.079 Doorbell Buffer Config: Supported 00:13:40.079 Get LBA Status Capability: Not Supported 00:13:40.079 Command & Feature Lockdown Capability: Not Supported 00:13:40.079 Abort Command Limit: 4 00:13:40.079 Async Event Request Limit: 4 00:13:40.079 Number of Firmware Slots: N/A 00:13:40.079 Firmware Slot 1 Read-Only: N/A 00:13:40.079 Firmware Activation Without Reset: N/A 00:13:40.079
Multiple Update Detection Support: N/A 00:13:40.079 Firmware Update Granularity: No Information Provided 00:13:40.079 Per-Namespace SMART Log: Yes 00:13:40.079 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.079 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:40.079 Command Effects Log Page: Supported 00:13:40.079 Get Log Page Extended Data: Supported 00:13:40.079 Telemetry Log Pages: Not Supported 00:13:40.079 Persistent Event Log Pages: Not Supported 00:13:40.079 Supported Log Pages Log Page: May Support 00:13:40.079 Commands Supported & Effects Log Page: Not Supported 00:13:40.079 Feature Identifiers & Effects Log Page:May Support 00:13:40.079 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.079 Data Area 4 for Telemetry Log: Not Supported 00:13:40.079 Error Log Page Entries Supported: 1 00:13:40.079 Keep Alive: Not Supported 00:13:40.079 00:13:40.079 NVM Command Set Attributes 00:13:40.079 ========================== 00:13:40.079 Submission Queue Entry Size 00:13:40.079 Max: 64 00:13:40.079 Min: 64 00:13:40.079 Completion Queue Entry Size 00:13:40.079 Max: 16 00:13:40.079 Min: 16 00:13:40.079 Number of Namespaces: 256 00:13:40.079 Compare Command: Supported 00:13:40.079 Write Uncorrectable Command: Not Supported 00:13:40.079 Dataset Management Command: Supported 00:13:40.079 Write Zeroes Command: Supported 00:13:40.079 Set Features Save Field: Supported 00:13:40.079 Reservations: Not Supported 00:13:40.079 Timestamp: Supported 00:13:40.079 Copy: Supported 00:13:40.079 Volatile Write Cache: Present 00:13:40.079 Atomic Write Unit (Normal): 1 00:13:40.079 Atomic Write Unit (PFail): 1 00:13:40.079 Atomic Compare & Write Unit: 1 00:13:40.079 Fused Compare & Write: Not Supported 00:13:40.079 Scatter-Gather List 00:13:40.079 SGL Command Set: Supported 00:13:40.079 SGL Keyed: Not Supported 00:13:40.079 SGL Bit Bucket Descriptor: Not Supported 00:13:40.079 SGL Metadata Pointer: Not Supported 00:13:40.079 Oversized SGL: Not Supported 00:13:40.079 SGL Metadata Address: Not Supported 00:13:40.079 SGL Offset: Not Supported 00:13:40.079 Transport SGL Data Block: Not Supported 00:13:40.079 Replay Protected Memory Block: Not Supported 00:13:40.079 00:13:40.079 Firmware Slot Information 00:13:40.079 ========================= 00:13:40.079 Active slot: 1 00:13:40.079 Slot 1 Firmware Revision: 1.0 00:13:40.079 00:13:40.079 00:13:40.079 Commands Supported and Effects 00:13:40.079 ============================== 00:13:40.079 Admin Commands 00:13:40.079 -------------- 00:13:40.079 Delete I/O Submission Queue (00h): Supported 00:13:40.079 Create I/O Submission Queue (01h): Supported 00:13:40.079 Get Log Page (02h): Supported 00:13:40.079 Delete I/O Completion Queue (04h): Supported 00:13:40.079 Create I/O Completion Queue (05h): Supported 00:13:40.079 Identify (06h): Supported 00:13:40.079 Abort (08h): Supported 00:13:40.079 Set Features (09h): Supported 00:13:40.079 Get Features (0Ah): Supported 00:13:40.079 Asynchronous Event Request (0Ch): Supported 00:13:40.079 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:40.079 Directive Send (19h): Supported 00:13:40.079 Directive Receive (1Ah): Supported 00:13:40.079 Virtualization Management (1Ch): Supported 00:13:40.079 Doorbell Buffer Config (7Ch): Supported 00:13:40.079 Format NVM (80h): Supported LBA-Change 00:13:40.079 I/O Commands 00:13:40.079 ------------ 00:13:40.079 Flush (00h): Supported LBA-Change 00:13:40.079 Write (01h): Supported LBA-Change 00:13:40.079 Read (02h): Supported 00:13:40.079 Compare (05h): Supported 
00:13:40.079 Write Zeroes (08h): Supported LBA-Change 00:13:40.079 Dataset Management (09h): Supported LBA-Change 00:13:40.079 Unknown (0Ch): Supported 00:13:40.079 Unknown (12h): Supported 00:13:40.079 Copy (19h): Supported LBA-Change 00:13:40.079 Unknown (1Dh): Supported LBA-Change 00:13:40.079 00:13:40.079 Error Log 00:13:40.079 ========= 00:13:40.079 00:13:40.079 Arbitration 00:13:40.079 =========== 00:13:40.079 Arbitration Burst: no limit 00:13:40.079 00:13:40.079 Power Management 00:13:40.079 ================ 00:13:40.079 Number of Power States: 1 00:13:40.079 Current Power State: Power State #0 00:13:40.079 Power State #0: 00:13:40.079 Max Power: 25.00 W 00:13:40.079 Non-Operational State: Operational 00:13:40.079 Entry Latency: 16 microseconds 00:13:40.079 Exit Latency: 4 microseconds 00:13:40.079 Relative Read Throughput: 0 00:13:40.079 Relative Read Latency: 0 00:13:40.079 Relative Write Throughput: 0 00:13:40.079 Relative Write Latency: 0 00:13:40.079 Idle Power: Not Reported 00:13:40.079 Active Power: Not Reported 00:13:40.079 Non-Operational Permissive Mode: Not Supported 00:13:40.079 00:13:40.079 Health Information 00:13:40.079 ================== 00:13:40.079 Critical Warnings: 00:13:40.079 Available Spare Space: OK 00:13:40.079 Temperature: OK 00:13:40.079 Device Reliability: OK 00:13:40.079 Read Only: No 00:13:40.079 Volatile Memory Backup: OK 00:13:40.079 Current Temperature: 323 Kelvin (50 Celsius) 00:13:40.079 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:40.079 Available Spare: 0% 00:13:40.079 Available Spare Threshold: 0% 00:13:40.079 Life Percentage Used: 0% 00:13:40.079 Data Units Read: 1921 00:13:40.079 Data Units Written: 1708 00:13:40.079 Host Read Commands: 97448 00:13:40.079 Host Write Commands: 95717 00:13:40.079 Controller Busy Time: 0 minutes 00:13:40.079 Power Cycles: 0 00:13:40.079 Power On Hours: 0 hours 00:13:40.079 Unsafe Shutdowns: 0 00:13:40.079 Unrecoverable Media Errors: 0 00:13:40.079 Lifetime Error Log Entries: 0 00:13:40.079 Warning Temperature Time: 0 minutes 00:13:40.079 Critical Temperature Time: 0 minutes 00:13:40.079 00:13:40.079 Number of Queues 00:13:40.079 ================ 00:13:40.079 Number of I/O Submission Queues: 64 00:13:40.079 Number of I/O Completion Queues: 64 00:13:40.079 00:13:40.079 ZNS Specific Controller Data 00:13:40.079 ============================ 00:13:40.079 Zone Append Size Limit: 0 00:13:40.079 00:13:40.079 00:13:40.079 Active Namespaces 00:13:40.079 ================= 00:13:40.079 Namespace ID:1 00:13:40.079 Error Recovery Timeout: Unlimited 00:13:40.079 Command Set Identifier: NVM (00h) 00:13:40.079 Deallocate: Supported 00:13:40.079 Deallocated/Unwritten Error: Supported 00:13:40.079 Deallocated Read Value: All 0x00 00:13:40.079 Deallocate in Write Zeroes: Not Supported 00:13:40.079 Deallocated Guard Field: 0xFFFF 00:13:40.079 Flush: Supported 00:13:40.079 Reservation: Not Supported 00:13:40.079 Namespace Sharing Capabilities: Private 00:13:40.079 Size (in LBAs): 1048576 (4GiB) 00:13:40.079 Capacity (in LBAs): 1048576 (4GiB) 00:13:40.079 Utilization (in LBAs): 1048576 (4GiB) 00:13:40.079 Thin Provisioning: Not Supported 00:13:40.079 Per-NS Atomic Units: No 00:13:40.079 Maximum Single Source Range Length: 128 00:13:40.079 Maximum Copy Length: 128 00:13:40.079 Maximum Source Range Count: 128 00:13:40.079 NGUID/EUI64 Never Reused: No 00:13:40.079 Namespace Write Protected: No 00:13:40.079 Number of LBA Formats: 8 00:13:40.079 Current LBA Format: LBA Format #04 00:13:40.079 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:13:40.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.079 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.080 00:13:40.080 NVM Specific Namespace Data 00:13:40.080 =========================== 00:13:40.080 Logical Block Storage Tag Mask: 0 00:13:40.080 Protection Information Capabilities: 00:13:40.080 16b Guard Protection Information Storage Tag Support: No 00:13:40.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.080 Storage Tag Check Read Support: No 00:13:40.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Namespace ID:2 00:13:40.080 Error Recovery Timeout: Unlimited 00:13:40.080 Command Set Identifier: NVM (00h) 00:13:40.080 Deallocate: Supported 00:13:40.080 Deallocated/Unwritten Error: Supported 00:13:40.080 Deallocated Read Value: All 0x00 00:13:40.080 Deallocate in Write Zeroes: Not Supported 00:13:40.080 Deallocated Guard Field: 0xFFFF 00:13:40.080 Flush: Supported 00:13:40.080 Reservation: Not Supported 00:13:40.080 Namespace Sharing Capabilities: Private 00:13:40.080 Size (in LBAs): 1048576 (4GiB) 00:13:40.080 Capacity (in LBAs): 1048576 (4GiB) 00:13:40.080 Utilization (in LBAs): 1048576 (4GiB) 00:13:40.080 Thin Provisioning: Not Supported 00:13:40.080 Per-NS Atomic Units: No 00:13:40.080 Maximum Single Source Range Length: 128 00:13:40.080 Maximum Copy Length: 128 00:13:40.080 Maximum Source Range Count: 128 00:13:40.080 NGUID/EUI64 Never Reused: No 00:13:40.080 Namespace Write Protected: No 00:13:40.080 Number of LBA Formats: 8 00:13:40.080 Current LBA Format: LBA Format #04 00:13:40.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.080 00:13:40.080 NVM Specific Namespace Data 00:13:40.080 =========================== 00:13:40.080 Logical Block Storage Tag Mask: 0 00:13:40.080 Protection Information Capabilities: 00:13:40.080 16b Guard Protection Information Storage Tag Support: No 00:13:40.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.080 Storage 
Tag Check Read Support: No 00:13:40.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Namespace ID:3 00:13:40.080 Error Recovery Timeout: Unlimited 00:13:40.080 Command Set Identifier: NVM (00h) 00:13:40.080 Deallocate: Supported 00:13:40.080 Deallocated/Unwritten Error: Supported 00:13:40.080 Deallocated Read Value: All 0x00 00:13:40.080 Deallocate in Write Zeroes: Not Supported 00:13:40.080 Deallocated Guard Field: 0xFFFF 00:13:40.080 Flush: Supported 00:13:40.080 Reservation: Not Supported 00:13:40.080 Namespace Sharing Capabilities: Private 00:13:40.080 Size (in LBAs): 1048576 (4GiB) 00:13:40.080 Capacity (in LBAs): 1048576 (4GiB) 00:13:40.080 Utilization (in LBAs): 1048576 (4GiB) 00:13:40.080 Thin Provisioning: Not Supported 00:13:40.080 Per-NS Atomic Units: No 00:13:40.080 Maximum Single Source Range Length: 128 00:13:40.080 Maximum Copy Length: 128 00:13:40.080 Maximum Source Range Count: 128 00:13:40.080 NGUID/EUI64 Never Reused: No 00:13:40.080 Namespace Write Protected: No 00:13:40.080 Number of LBA Formats: 8 00:13:40.080 Current LBA Format: LBA Format #04 00:13:40.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.080 00:13:40.080 NVM Specific Namespace Data 00:13:40.080 =========================== 00:13:40.080 Logical Block Storage Tag Mask: 0 00:13:40.080 Protection Information Capabilities: 00:13:40.080 16b Guard Protection Information Storage Tag Support: No 00:13:40.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.080 Storage Tag Check Read Support: No 00:13:40.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.080 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:13:40.080 13:39:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:40.080 13:39:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:13:40.340 ===================================================== 00:13:40.340 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:40.340 ===================================================== 00:13:40.340 Controller Capabilities/Features 00:13:40.340 ================================ 00:13:40.340 Vendor ID: 1b36 00:13:40.340 Subsystem Vendor ID: 1af4 00:13:40.340 Serial Number: 12340 00:13:40.340 Model Number: QEMU NVMe Ctrl 00:13:40.340 Firmware Version: 8.0.0 00:13:40.340 Recommended Arb Burst: 6 00:13:40.340 IEEE OUI Identifier: 00 54 52 00:13:40.340 Multi-path I/O 00:13:40.340 May have multiple subsystem ports: No 00:13:40.340 May have multiple controllers: No 00:13:40.340 Associated with SR-IOV VF: No 00:13:40.340 Max Data Transfer Size: 524288 00:13:40.340 Max Number of Namespaces: 256 00:13:40.340 Max Number of I/O Queues: 64 00:13:40.340 NVMe Specification Version (VS): 1.4 00:13:40.340 NVMe Specification Version (Identify): 1.4 00:13:40.340 Maximum Queue Entries: 2048 00:13:40.340 Contiguous Queues Required: Yes 00:13:40.340 Arbitration Mechanisms Supported 00:13:40.340 Weighted Round Robin: Not Supported 00:13:40.340 Vendor Specific: Not Supported 00:13:40.340 Reset Timeout: 7500 ms 00:13:40.340 Doorbell Stride: 4 bytes 00:13:40.340 NVM Subsystem Reset: Not Supported 00:13:40.340 Command Sets Supported 00:13:40.340 NVM Command Set: Supported 00:13:40.340 Boot Partition: Not Supported 00:13:40.340 Memory Page Size Minimum: 4096 bytes 00:13:40.340 Memory Page Size Maximum: 65536 bytes 00:13:40.340 Persistent Memory Region: Not Supported 00:13:40.340 Optional Asynchronous Events Supported 00:13:40.340 Namespace Attribute Notices: Supported 00:13:40.340 Firmware Activation Notices: Not Supported 00:13:40.340 ANA Change Notices: Not Supported 00:13:40.340 PLE Aggregate Log Change Notices: Not Supported 00:13:40.340 LBA Status Info Alert Notices: Not Supported 00:13:40.340 EGE Aggregate Log Change Notices: Not Supported 00:13:40.340 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.340 Zone Descriptor Change Notices: Not Supported 00:13:40.340 Discovery Log Change Notices: Not Supported 00:13:40.340 Controller Attributes 00:13:40.340 128-bit Host Identifier: Not Supported 00:13:40.340 Non-Operational Permissive Mode: Not Supported 00:13:40.340 NVM Sets: Not Supported 00:13:40.340 Read Recovery Levels: Not Supported 00:13:40.340 Endurance Groups: Not Supported 00:13:40.340 Predictable Latency Mode: Not Supported 00:13:40.340 Traffic Based Keep ALive: Not Supported 00:13:40.340 Namespace Granularity: Not Supported 00:13:40.340 SQ Associations: Not Supported 00:13:40.340 UUID List: Not Supported 00:13:40.340 Multi-Domain Subsystem: Not Supported 00:13:40.340 Fixed Capacity Management: Not Supported 00:13:40.340 Variable Capacity Management: Not Supported 00:13:40.340 Delete Endurance Group: Not Supported 00:13:40.340 Delete NVM Set: Not Supported 00:13:40.340 Extended LBA Formats Supported: Supported 00:13:40.340 Flexible Data Placement Supported: Not Supported 00:13:40.340 00:13:40.340 Controller Memory Buffer Support 00:13:40.340 ================================ 00:13:40.340 Supported: No 00:13:40.340 00:13:40.340 Persistent Memory Region Support 00:13:40.340 
================================ 00:13:40.340 Supported: No 00:13:40.340 00:13:40.340 Admin Command Set Attributes 00:13:40.340 ============================ 00:13:40.340 Security Send/Receive: Not Supported 00:13:40.340 Format NVM: Supported 00:13:40.340 Firmware Activate/Download: Not Supported 00:13:40.340 Namespace Management: Supported 00:13:40.340 Device Self-Test: Not Supported 00:13:40.340 Directives: Supported 00:13:40.340 NVMe-MI: Not Supported 00:13:40.340 Virtualization Management: Not Supported 00:13:40.340 Doorbell Buffer Config: Supported 00:13:40.340 Get LBA Status Capability: Not Supported 00:13:40.340 Command & Feature Lockdown Capability: Not Supported 00:13:40.340 Abort Command Limit: 4 00:13:40.340 Async Event Request Limit: 4 00:13:40.340 Number of Firmware Slots: N/A 00:13:40.340 Firmware Slot 1 Read-Only: N/A 00:13:40.340 Firmware Activation Without Reset: N/A 00:13:40.340 Multiple Update Detection Support: N/A 00:13:40.340 Firmware Update Granularity: No Information Provided 00:13:40.340 Per-Namespace SMART Log: Yes 00:13:40.340 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.340 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:40.340 Command Effects Log Page: Supported 00:13:40.340 Get Log Page Extended Data: Supported 00:13:40.340 Telemetry Log Pages: Not Supported 00:13:40.340 Persistent Event Log Pages: Not Supported 00:13:40.340 Supported Log Pages Log Page: May Support 00:13:40.340 Commands Supported & Effects Log Page: Not Supported 00:13:40.340 Feature Identifiers & Effects Log Page:May Support 00:13:40.340 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.340 Data Area 4 for Telemetry Log: Not Supported 00:13:40.340 Error Log Page Entries Supported: 1 00:13:40.340 Keep Alive: Not Supported 00:13:40.340 00:13:40.340 NVM Command Set Attributes 00:13:40.340 ========================== 00:13:40.340 Submission Queue Entry Size 00:13:40.340 Max: 64 00:13:40.340 Min: 64 00:13:40.340 Completion Queue Entry Size 00:13:40.340 Max: 16 00:13:40.340 Min: 16 00:13:40.340 Number of Namespaces: 256 00:13:40.340 Compare Command: Supported 00:13:40.340 Write Uncorrectable Command: Not Supported 00:13:40.340 Dataset Management Command: Supported 00:13:40.340 Write Zeroes Command: Supported 00:13:40.340 Set Features Save Field: Supported 00:13:40.340 Reservations: Not Supported 00:13:40.340 Timestamp: Supported 00:13:40.340 Copy: Supported 00:13:40.340 Volatile Write Cache: Present 00:13:40.340 Atomic Write Unit (Normal): 1 00:13:40.340 Atomic Write Unit (PFail): 1 00:13:40.340 Atomic Compare & Write Unit: 1 00:13:40.340 Fused Compare & Write: Not Supported 00:13:40.340 Scatter-Gather List 00:13:40.340 SGL Command Set: Supported 00:13:40.340 SGL Keyed: Not Supported 00:13:40.340 SGL Bit Bucket Descriptor: Not Supported 00:13:40.340 SGL Metadata Pointer: Not Supported 00:13:40.340 Oversized SGL: Not Supported 00:13:40.340 SGL Metadata Address: Not Supported 00:13:40.340 SGL Offset: Not Supported 00:13:40.340 Transport SGL Data Block: Not Supported 00:13:40.340 Replay Protected Memory Block: Not Supported 00:13:40.340 00:13:40.340 Firmware Slot Information 00:13:40.340 ========================= 00:13:40.340 Active slot: 1 00:13:40.340 Slot 1 Firmware Revision: 1.0 00:13:40.340 00:13:40.340 00:13:40.340 Commands Supported and Effects 00:13:40.340 ============================== 00:13:40.340 Admin Commands 00:13:40.340 -------------- 00:13:40.340 Delete I/O Submission Queue (00h): Supported 00:13:40.340 Create I/O Submission Queue (01h): Supported 00:13:40.340 
Get Log Page (02h): Supported 00:13:40.340 Delete I/O Completion Queue (04h): Supported 00:13:40.340 Create I/O Completion Queue (05h): Supported 00:13:40.340 Identify (06h): Supported 00:13:40.340 Abort (08h): Supported 00:13:40.340 Set Features (09h): Supported 00:13:40.340 Get Features (0Ah): Supported 00:13:40.340 Asynchronous Event Request (0Ch): Supported 00:13:40.340 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:40.340 Directive Send (19h): Supported 00:13:40.340 Directive Receive (1Ah): Supported 00:13:40.340 Virtualization Management (1Ch): Supported 00:13:40.340 Doorbell Buffer Config (7Ch): Supported 00:13:40.340 Format NVM (80h): Supported LBA-Change 00:13:40.340 I/O Commands 00:13:40.340 ------------ 00:13:40.340 Flush (00h): Supported LBA-Change 00:13:40.340 Write (01h): Supported LBA-Change 00:13:40.340 Read (02h): Supported 00:13:40.340 Compare (05h): Supported 00:13:40.340 Write Zeroes (08h): Supported LBA-Change 00:13:40.340 Dataset Management (09h): Supported LBA-Change 00:13:40.340 Unknown (0Ch): Supported 00:13:40.340 Unknown (12h): Supported 00:13:40.340 Copy (19h): Supported LBA-Change 00:13:40.341 Unknown (1Dh): Supported LBA-Change 00:13:40.341 00:13:40.341 Error Log 00:13:40.341 ========= 00:13:40.341 00:13:40.341 Arbitration 00:13:40.341 =========== 00:13:40.341 Arbitration Burst: no limit 00:13:40.341 00:13:40.341 Power Management 00:13:40.341 ================ 00:13:40.341 Number of Power States: 1 00:13:40.341 Current Power State: Power State #0 00:13:40.341 Power State #0: 00:13:40.341 Max Power: 25.00 W 00:13:40.341 Non-Operational State: Operational 00:13:40.341 Entry Latency: 16 microseconds 00:13:40.341 Exit Latency: 4 microseconds 00:13:40.341 Relative Read Throughput: 0 00:13:40.341 Relative Read Latency: 0 00:13:40.341 Relative Write Throughput: 0 00:13:40.341 Relative Write Latency: 0 00:13:40.341 Idle Power: Not Reported 00:13:40.341 Active Power: Not Reported 00:13:40.341 Non-Operational Permissive Mode: Not Supported 00:13:40.341 00:13:40.341 Health Information 00:13:40.341 ================== 00:13:40.341 Critical Warnings: 00:13:40.341 Available Spare Space: OK 00:13:40.341 Temperature: OK 00:13:40.341 Device Reliability: OK 00:13:40.341 Read Only: No 00:13:40.341 Volatile Memory Backup: OK 00:13:40.341 Current Temperature: 323 Kelvin (50 Celsius) 00:13:40.341 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:40.341 Available Spare: 0% 00:13:40.341 Available Spare Threshold: 0% 00:13:40.341 Life Percentage Used: 0% 00:13:40.341 Data Units Read: 616 00:13:40.341 Data Units Written: 544 00:13:40.341 Host Read Commands: 32070 00:13:40.341 Host Write Commands: 31856 00:13:40.341 Controller Busy Time: 0 minutes 00:13:40.341 Power Cycles: 0 00:13:40.341 Power On Hours: 0 hours 00:13:40.341 Unsafe Shutdowns: 0 00:13:40.341 Unrecoverable Media Errors: 0 00:13:40.341 Lifetime Error Log Entries: 0 00:13:40.341 Warning Temperature Time: 0 minutes 00:13:40.341 Critical Temperature Time: 0 minutes 00:13:40.341 00:13:40.341 Number of Queues 00:13:40.341 ================ 00:13:40.341 Number of I/O Submission Queues: 64 00:13:40.341 Number of I/O Completion Queues: 64 00:13:40.341 00:13:40.341 ZNS Specific Controller Data 00:13:40.341 ============================ 00:13:40.341 Zone Append Size Limit: 0 00:13:40.341 00:13:40.341 00:13:40.341 Active Namespaces 00:13:40.341 ================= 00:13:40.341 Namespace ID:1 00:13:40.341 Error Recovery Timeout: Unlimited 00:13:40.341 Command Set Identifier: NVM (00h) 00:13:40.341 Deallocate: Supported 
00:13:40.341 Deallocated/Unwritten Error: Supported 00:13:40.341 Deallocated Read Value: All 0x00 00:13:40.341 Deallocate in Write Zeroes: Not Supported 00:13:40.341 Deallocated Guard Field: 0xFFFF 00:13:40.341 Flush: Supported 00:13:40.341 Reservation: Not Supported 00:13:40.341 Metadata Transferred as: Separate Metadata Buffer 00:13:40.341 Namespace Sharing Capabilities: Private 00:13:40.341 Size (in LBAs): 1548666 (5GiB) 00:13:40.341 Capacity (in LBAs): 1548666 (5GiB) 00:13:40.341 Utilization (in LBAs): 1548666 (5GiB) 00:13:40.341 Thin Provisioning: Not Supported 00:13:40.341 Per-NS Atomic Units: No 00:13:40.341 Maximum Single Source Range Length: 128 00:13:40.341 Maximum Copy Length: 128 00:13:40.341 Maximum Source Range Count: 128 00:13:40.341 NGUID/EUI64 Never Reused: No 00:13:40.341 Namespace Write Protected: No 00:13:40.341 Number of LBA Formats: 8 00:13:40.341 Current LBA Format: LBA Format #07 00:13:40.341 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.341 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:40.341 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.341 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.341 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.341 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.341 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.341 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.341 00:13:40.341 NVM Specific Namespace Data 00:13:40.341 =========================== 00:13:40.341 Logical Block Storage Tag Mask: 0 00:13:40.341 Protection Information Capabilities: 00:13:40.341 16b Guard Protection Information Storage Tag Support: No 00:13:40.341 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.341 Storage Tag Check Read Support: No 00:13:40.341 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.341 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.341 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.341 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.341 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.341 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.341 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.341 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.341 13:39:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:40.341 13:39:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:13:40.600 ===================================================== 00:13:40.600 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:40.600 ===================================================== 00:13:40.600 Controller Capabilities/Features 00:13:40.600 ================================ 00:13:40.600 Vendor ID: 1b36 00:13:40.600 Subsystem Vendor ID: 1af4 00:13:40.600 Serial Number: 12341 00:13:40.600 Model Number: QEMU NVMe Ctrl 00:13:40.600 Firmware Version: 8.0.0 00:13:40.600 Recommended Arb Burst: 6 00:13:40.600 IEEE OUI Identifier: 00 54 52 00:13:40.600 Multi-path I/O 00:13:40.600 May have multiple subsystem ports: No 00:13:40.600 May have multiple 
controllers: No 00:13:40.600 Associated with SR-IOV VF: No 00:13:40.600 Max Data Transfer Size: 524288 00:13:40.600 Max Number of Namespaces: 256 00:13:40.600 Max Number of I/O Queues: 64 00:13:40.600 NVMe Specification Version (VS): 1.4 00:13:40.600 NVMe Specification Version (Identify): 1.4 00:13:40.600 Maximum Queue Entries: 2048 00:13:40.600 Contiguous Queues Required: Yes 00:13:40.600 Arbitration Mechanisms Supported 00:13:40.600 Weighted Round Robin: Not Supported 00:13:40.600 Vendor Specific: Not Supported 00:13:40.600 Reset Timeout: 7500 ms 00:13:40.600 Doorbell Stride: 4 bytes 00:13:40.600 NVM Subsystem Reset: Not Supported 00:13:40.600 Command Sets Supported 00:13:40.600 NVM Command Set: Supported 00:13:40.600 Boot Partition: Not Supported 00:13:40.600 Memory Page Size Minimum: 4096 bytes 00:13:40.600 Memory Page Size Maximum: 65536 bytes 00:13:40.600 Persistent Memory Region: Not Supported 00:13:40.600 Optional Asynchronous Events Supported 00:13:40.600 Namespace Attribute Notices: Supported 00:13:40.600 Firmware Activation Notices: Not Supported 00:13:40.600 ANA Change Notices: Not Supported 00:13:40.600 PLE Aggregate Log Change Notices: Not Supported 00:13:40.600 LBA Status Info Alert Notices: Not Supported 00:13:40.600 EGE Aggregate Log Change Notices: Not Supported 00:13:40.600 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.600 Zone Descriptor Change Notices: Not Supported 00:13:40.600 Discovery Log Change Notices: Not Supported 00:13:40.600 Controller Attributes 00:13:40.600 128-bit Host Identifier: Not Supported 00:13:40.600 Non-Operational Permissive Mode: Not Supported 00:13:40.600 NVM Sets: Not Supported 00:13:40.600 Read Recovery Levels: Not Supported 00:13:40.600 Endurance Groups: Not Supported 00:13:40.600 Predictable Latency Mode: Not Supported 00:13:40.600 Traffic Based Keep ALive: Not Supported 00:13:40.600 Namespace Granularity: Not Supported 00:13:40.600 SQ Associations: Not Supported 00:13:40.600 UUID List: Not Supported 00:13:40.600 Multi-Domain Subsystem: Not Supported 00:13:40.600 Fixed Capacity Management: Not Supported 00:13:40.600 Variable Capacity Management: Not Supported 00:13:40.600 Delete Endurance Group: Not Supported 00:13:40.600 Delete NVM Set: Not Supported 00:13:40.600 Extended LBA Formats Supported: Supported 00:13:40.600 Flexible Data Placement Supported: Not Supported 00:13:40.600 00:13:40.600 Controller Memory Buffer Support 00:13:40.600 ================================ 00:13:40.600 Supported: No 00:13:40.600 00:13:40.600 Persistent Memory Region Support 00:13:40.601 ================================ 00:13:40.601 Supported: No 00:13:40.601 00:13:40.601 Admin Command Set Attributes 00:13:40.601 ============================ 00:13:40.601 Security Send/Receive: Not Supported 00:13:40.601 Format NVM: Supported 00:13:40.601 Firmware Activate/Download: Not Supported 00:13:40.601 Namespace Management: Supported 00:13:40.601 Device Self-Test: Not Supported 00:13:40.601 Directives: Supported 00:13:40.601 NVMe-MI: Not Supported 00:13:40.601 Virtualization Management: Not Supported 00:13:40.601 Doorbell Buffer Config: Supported 00:13:40.601 Get LBA Status Capability: Not Supported 00:13:40.601 Command & Feature Lockdown Capability: Not Supported 00:13:40.601 Abort Command Limit: 4 00:13:40.601 Async Event Request Limit: 4 00:13:40.601 Number of Firmware Slots: N/A 00:13:40.601 Firmware Slot 1 Read-Only: N/A 00:13:40.601 Firmware Activation Without Reset: N/A 00:13:40.601 Multiple Update Detection Support: N/A 00:13:40.601 Firmware Update 
Granularity: No Information Provided 00:13:40.601 Per-Namespace SMART Log: Yes 00:13:40.601 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.601 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:40.601 Command Effects Log Page: Supported 00:13:40.601 Get Log Page Extended Data: Supported 00:13:40.601 Telemetry Log Pages: Not Supported 00:13:40.601 Persistent Event Log Pages: Not Supported 00:13:40.601 Supported Log Pages Log Page: May Support 00:13:40.601 Commands Supported & Effects Log Page: Not Supported 00:13:40.601 Feature Identifiers & Effects Log Page:May Support 00:13:40.601 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.601 Data Area 4 for Telemetry Log: Not Supported 00:13:40.601 Error Log Page Entries Supported: 1 00:13:40.601 Keep Alive: Not Supported 00:13:40.601 00:13:40.601 NVM Command Set Attributes 00:13:40.601 ========================== 00:13:40.601 Submission Queue Entry Size 00:13:40.601 Max: 64 00:13:40.601 Min: 64 00:13:40.601 Completion Queue Entry Size 00:13:40.601 Max: 16 00:13:40.601 Min: 16 00:13:40.601 Number of Namespaces: 256 00:13:40.601 Compare Command: Supported 00:13:40.601 Write Uncorrectable Command: Not Supported 00:13:40.601 Dataset Management Command: Supported 00:13:40.601 Write Zeroes Command: Supported 00:13:40.601 Set Features Save Field: Supported 00:13:40.601 Reservations: Not Supported 00:13:40.601 Timestamp: Supported 00:13:40.601 Copy: Supported 00:13:40.601 Volatile Write Cache: Present 00:13:40.601 Atomic Write Unit (Normal): 1 00:13:40.601 Atomic Write Unit (PFail): 1 00:13:40.601 Atomic Compare & Write Unit: 1 00:13:40.601 Fused Compare & Write: Not Supported 00:13:40.601 Scatter-Gather List 00:13:40.601 SGL Command Set: Supported 00:13:40.601 SGL Keyed: Not Supported 00:13:40.601 SGL Bit Bucket Descriptor: Not Supported 00:13:40.601 SGL Metadata Pointer: Not Supported 00:13:40.601 Oversized SGL: Not Supported 00:13:40.601 SGL Metadata Address: Not Supported 00:13:40.601 SGL Offset: Not Supported 00:13:40.601 Transport SGL Data Block: Not Supported 00:13:40.601 Replay Protected Memory Block: Not Supported 00:13:40.601 00:13:40.601 Firmware Slot Information 00:13:40.601 ========================= 00:13:40.601 Active slot: 1 00:13:40.601 Slot 1 Firmware Revision: 1.0 00:13:40.601 00:13:40.601 00:13:40.601 Commands Supported and Effects 00:13:40.601 ============================== 00:13:40.601 Admin Commands 00:13:40.601 -------------- 00:13:40.601 Delete I/O Submission Queue (00h): Supported 00:13:40.601 Create I/O Submission Queue (01h): Supported 00:13:40.601 Get Log Page (02h): Supported 00:13:40.601 Delete I/O Completion Queue (04h): Supported 00:13:40.601 Create I/O Completion Queue (05h): Supported 00:13:40.601 Identify (06h): Supported 00:13:40.601 Abort (08h): Supported 00:13:40.601 Set Features (09h): Supported 00:13:40.601 Get Features (0Ah): Supported 00:13:40.601 Asynchronous Event Request (0Ch): Supported 00:13:40.601 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:40.601 Directive Send (19h): Supported 00:13:40.601 Directive Receive (1Ah): Supported 00:13:40.601 Virtualization Management (1Ch): Supported 00:13:40.601 Doorbell Buffer Config (7Ch): Supported 00:13:40.601 Format NVM (80h): Supported LBA-Change 00:13:40.601 I/O Commands 00:13:40.601 ------------ 00:13:40.601 Flush (00h): Supported LBA-Change 00:13:40.601 Write (01h): Supported LBA-Change 00:13:40.601 Read (02h): Supported 00:13:40.601 Compare (05h): Supported 00:13:40.601 Write Zeroes (08h): Supported LBA-Change 00:13:40.601 
Dataset Management (09h): Supported LBA-Change 00:13:40.601 Unknown (0Ch): Supported 00:13:40.601 Unknown (12h): Supported 00:13:40.601 Copy (19h): Supported LBA-Change 00:13:40.601 Unknown (1Dh): Supported LBA-Change 00:13:40.601 00:13:40.601 Error Log 00:13:40.601 ========= 00:13:40.601 00:13:40.601 Arbitration 00:13:40.601 =========== 00:13:40.601 Arbitration Burst: no limit 00:13:40.601 00:13:40.601 Power Management 00:13:40.601 ================ 00:13:40.601 Number of Power States: 1 00:13:40.601 Current Power State: Power State #0 00:13:40.601 Power State #0: 00:13:40.601 Max Power: 25.00 W 00:13:40.601 Non-Operational State: Operational 00:13:40.601 Entry Latency: 16 microseconds 00:13:40.601 Exit Latency: 4 microseconds 00:13:40.601 Relative Read Throughput: 0 00:13:40.601 Relative Read Latency: 0 00:13:40.601 Relative Write Throughput: 0 00:13:40.601 Relative Write Latency: 0 00:13:40.860 Idle Power: Not Reported 00:13:40.860 Active Power: Not Reported 00:13:40.860 Non-Operational Permissive Mode: Not Supported 00:13:40.860 00:13:40.860 Health Information 00:13:40.860 ================== 00:13:40.860 Critical Warnings: 00:13:40.860 Available Spare Space: OK 00:13:40.860 Temperature: OK 00:13:40.860 Device Reliability: OK 00:13:40.860 Read Only: No 00:13:40.860 Volatile Memory Backup: OK 00:13:40.860 Current Temperature: 323 Kelvin (50 Celsius) 00:13:40.860 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:40.860 Available Spare: 0% 00:13:40.860 Available Spare Threshold: 0% 00:13:40.860 Life Percentage Used: 0% 00:13:40.860 Data Units Read: 975 00:13:40.860 Data Units Written: 843 00:13:40.860 Host Read Commands: 47933 00:13:40.860 Host Write Commands: 46734 00:13:40.860 Controller Busy Time: 0 minutes 00:13:40.860 Power Cycles: 0 00:13:40.860 Power On Hours: 0 hours 00:13:40.860 Unsafe Shutdowns: 0 00:13:40.860 Unrecoverable Media Errors: 0 00:13:40.860 Lifetime Error Log Entries: 0 00:13:40.860 Warning Temperature Time: 0 minutes 00:13:40.860 Critical Temperature Time: 0 minutes 00:13:40.860 00:13:40.860 Number of Queues 00:13:40.860 ================ 00:13:40.860 Number of I/O Submission Queues: 64 00:13:40.860 Number of I/O Completion Queues: 64 00:13:40.860 00:13:40.860 ZNS Specific Controller Data 00:13:40.860 ============================ 00:13:40.860 Zone Append Size Limit: 0 00:13:40.860 00:13:40.860 00:13:40.860 Active Namespaces 00:13:40.860 ================= 00:13:40.860 Namespace ID:1 00:13:40.860 Error Recovery Timeout: Unlimited 00:13:40.860 Command Set Identifier: NVM (00h) 00:13:40.860 Deallocate: Supported 00:13:40.860 Deallocated/Unwritten Error: Supported 00:13:40.860 Deallocated Read Value: All 0x00 00:13:40.860 Deallocate in Write Zeroes: Not Supported 00:13:40.860 Deallocated Guard Field: 0xFFFF 00:13:40.860 Flush: Supported 00:13:40.860 Reservation: Not Supported 00:13:40.860 Namespace Sharing Capabilities: Private 00:13:40.860 Size (in LBAs): 1310720 (5GiB) 00:13:40.860 Capacity (in LBAs): 1310720 (5GiB) 00:13:40.860 Utilization (in LBAs): 1310720 (5GiB) 00:13:40.860 Thin Provisioning: Not Supported 00:13:40.860 Per-NS Atomic Units: No 00:13:40.860 Maximum Single Source Range Length: 128 00:13:40.860 Maximum Copy Length: 128 00:13:40.860 Maximum Source Range Count: 128 00:13:40.860 NGUID/EUI64 Never Reused: No 00:13:40.860 Namespace Write Protected: No 00:13:40.860 Number of LBA Formats: 8 00:13:40.860 Current LBA Format: LBA Format #04 00:13:40.860 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.860 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:13:40.860 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:40.860 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:40.860 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:40.860 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:40.860 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:40.860 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:40.860 00:13:40.860 NVM Specific Namespace Data 00:13:40.860 =========================== 00:13:40.860 Logical Block Storage Tag Mask: 0 00:13:40.860 Protection Information Capabilities: 00:13:40.860 16b Guard Protection Information Storage Tag Support: No 00:13:40.860 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:40.860 Storage Tag Check Read Support: No 00:13:40.861 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.861 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.861 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.861 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.861 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.861 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.861 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.861 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:40.861 13:39:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:40.861 13:39:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:13:41.120 ===================================================== 00:13:41.120 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:41.120 ===================================================== 00:13:41.120 Controller Capabilities/Features 00:13:41.120 ================================ 00:13:41.120 Vendor ID: 1b36 00:13:41.120 Subsystem Vendor ID: 1af4 00:13:41.120 Serial Number: 12342 00:13:41.120 Model Number: QEMU NVMe Ctrl 00:13:41.120 Firmware Version: 8.0.0 00:13:41.120 Recommended Arb Burst: 6 00:13:41.120 IEEE OUI Identifier: 00 54 52 00:13:41.120 Multi-path I/O 00:13:41.120 May have multiple subsystem ports: No 00:13:41.120 May have multiple controllers: No 00:13:41.120 Associated with SR-IOV VF: No 00:13:41.120 Max Data Transfer Size: 524288 00:13:41.120 Max Number of Namespaces: 256 00:13:41.120 Max Number of I/O Queues: 64 00:13:41.120 NVMe Specification Version (VS): 1.4 00:13:41.120 NVMe Specification Version (Identify): 1.4 00:13:41.120 Maximum Queue Entries: 2048 00:13:41.120 Contiguous Queues Required: Yes 00:13:41.120 Arbitration Mechanisms Supported 00:13:41.120 Weighted Round Robin: Not Supported 00:13:41.120 Vendor Specific: Not Supported 00:13:41.120 Reset Timeout: 7500 ms 00:13:41.120 Doorbell Stride: 4 bytes 00:13:41.120 NVM Subsystem Reset: Not Supported 00:13:41.120 Command Sets Supported 00:13:41.120 NVM Command Set: Supported 00:13:41.120 Boot Partition: Not Supported 00:13:41.120 Memory Page Size Minimum: 4096 bytes 00:13:41.120 Memory Page Size Maximum: 65536 bytes 00:13:41.120 Persistent Memory Region: Not Supported 00:13:41.120 Optional Asynchronous Events Supported 00:13:41.120 Namespace Attribute Notices: Supported 00:13:41.120 Firmware 
Activation Notices: Not Supported 00:13:41.120 ANA Change Notices: Not Supported 00:13:41.120 PLE Aggregate Log Change Notices: Not Supported 00:13:41.120 LBA Status Info Alert Notices: Not Supported 00:13:41.120 EGE Aggregate Log Change Notices: Not Supported 00:13:41.120 Normal NVM Subsystem Shutdown event: Not Supported 00:13:41.120 Zone Descriptor Change Notices: Not Supported 00:13:41.120 Discovery Log Change Notices: Not Supported 00:13:41.120 Controller Attributes 00:13:41.120 128-bit Host Identifier: Not Supported 00:13:41.120 Non-Operational Permissive Mode: Not Supported 00:13:41.120 NVM Sets: Not Supported 00:13:41.120 Read Recovery Levels: Not Supported 00:13:41.120 Endurance Groups: Not Supported 00:13:41.120 Predictable Latency Mode: Not Supported 00:13:41.120 Traffic Based Keep ALive: Not Supported 00:13:41.120 Namespace Granularity: Not Supported 00:13:41.120 SQ Associations: Not Supported 00:13:41.120 UUID List: Not Supported 00:13:41.120 Multi-Domain Subsystem: Not Supported 00:13:41.120 Fixed Capacity Management: Not Supported 00:13:41.120 Variable Capacity Management: Not Supported 00:13:41.120 Delete Endurance Group: Not Supported 00:13:41.120 Delete NVM Set: Not Supported 00:13:41.120 Extended LBA Formats Supported: Supported 00:13:41.120 Flexible Data Placement Supported: Not Supported 00:13:41.120 00:13:41.120 Controller Memory Buffer Support 00:13:41.120 ================================ 00:13:41.120 Supported: No 00:13:41.120 00:13:41.120 Persistent Memory Region Support 00:13:41.120 ================================ 00:13:41.120 Supported: No 00:13:41.120 00:13:41.120 Admin Command Set Attributes 00:13:41.120 ============================ 00:13:41.120 Security Send/Receive: Not Supported 00:13:41.120 Format NVM: Supported 00:13:41.120 Firmware Activate/Download: Not Supported 00:13:41.120 Namespace Management: Supported 00:13:41.120 Device Self-Test: Not Supported 00:13:41.120 Directives: Supported 00:13:41.120 NVMe-MI: Not Supported 00:13:41.120 Virtualization Management: Not Supported 00:13:41.120 Doorbell Buffer Config: Supported 00:13:41.120 Get LBA Status Capability: Not Supported 00:13:41.120 Command & Feature Lockdown Capability: Not Supported 00:13:41.120 Abort Command Limit: 4 00:13:41.120 Async Event Request Limit: 4 00:13:41.120 Number of Firmware Slots: N/A 00:13:41.120 Firmware Slot 1 Read-Only: N/A 00:13:41.120 Firmware Activation Without Reset: N/A 00:13:41.120 Multiple Update Detection Support: N/A 00:13:41.120 Firmware Update Granularity: No Information Provided 00:13:41.120 Per-Namespace SMART Log: Yes 00:13:41.120 Asymmetric Namespace Access Log Page: Not Supported 00:13:41.120 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:41.120 Command Effects Log Page: Supported 00:13:41.120 Get Log Page Extended Data: Supported 00:13:41.120 Telemetry Log Pages: Not Supported 00:13:41.120 Persistent Event Log Pages: Not Supported 00:13:41.120 Supported Log Pages Log Page: May Support 00:13:41.120 Commands Supported & Effects Log Page: Not Supported 00:13:41.120 Feature Identifiers & Effects Log Page:May Support 00:13:41.120 NVMe-MI Commands & Effects Log Page: May Support 00:13:41.120 Data Area 4 for Telemetry Log: Not Supported 00:13:41.120 Error Log Page Entries Supported: 1 00:13:41.120 Keep Alive: Not Supported 00:13:41.120 00:13:41.120 NVM Command Set Attributes 00:13:41.120 ========================== 00:13:41.120 Submission Queue Entry Size 00:13:41.120 Max: 64 00:13:41.120 Min: 64 00:13:41.120 Completion Queue Entry Size 00:13:41.120 Max: 16 
00:13:41.120 Min: 16 00:13:41.120 Number of Namespaces: 256 00:13:41.120 Compare Command: Supported 00:13:41.120 Write Uncorrectable Command: Not Supported 00:13:41.120 Dataset Management Command: Supported 00:13:41.120 Write Zeroes Command: Supported 00:13:41.120 Set Features Save Field: Supported 00:13:41.120 Reservations: Not Supported 00:13:41.120 Timestamp: Supported 00:13:41.120 Copy: Supported 00:13:41.120 Volatile Write Cache: Present 00:13:41.120 Atomic Write Unit (Normal): 1 00:13:41.120 Atomic Write Unit (PFail): 1 00:13:41.120 Atomic Compare & Write Unit: 1 00:13:41.120 Fused Compare & Write: Not Supported 00:13:41.120 Scatter-Gather List 00:13:41.120 SGL Command Set: Supported 00:13:41.120 SGL Keyed: Not Supported 00:13:41.120 SGL Bit Bucket Descriptor: Not Supported 00:13:41.120 SGL Metadata Pointer: Not Supported 00:13:41.120 Oversized SGL: Not Supported 00:13:41.120 SGL Metadata Address: Not Supported 00:13:41.120 SGL Offset: Not Supported 00:13:41.120 Transport SGL Data Block: Not Supported 00:13:41.120 Replay Protected Memory Block: Not Supported 00:13:41.120 00:13:41.120 Firmware Slot Information 00:13:41.120 ========================= 00:13:41.120 Active slot: 1 00:13:41.120 Slot 1 Firmware Revision: 1.0 00:13:41.120 00:13:41.120 00:13:41.120 Commands Supported and Effects 00:13:41.120 ============================== 00:13:41.120 Admin Commands 00:13:41.120 -------------- 00:13:41.120 Delete I/O Submission Queue (00h): Supported 00:13:41.120 Create I/O Submission Queue (01h): Supported 00:13:41.120 Get Log Page (02h): Supported 00:13:41.120 Delete I/O Completion Queue (04h): Supported 00:13:41.121 Create I/O Completion Queue (05h): Supported 00:13:41.121 Identify (06h): Supported 00:13:41.121 Abort (08h): Supported 00:13:41.121 Set Features (09h): Supported 00:13:41.121 Get Features (0Ah): Supported 00:13:41.121 Asynchronous Event Request (0Ch): Supported 00:13:41.121 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:41.121 Directive Send (19h): Supported 00:13:41.121 Directive Receive (1Ah): Supported 00:13:41.121 Virtualization Management (1Ch): Supported 00:13:41.121 Doorbell Buffer Config (7Ch): Supported 00:13:41.121 Format NVM (80h): Supported LBA-Change 00:13:41.121 I/O Commands 00:13:41.121 ------------ 00:13:41.121 Flush (00h): Supported LBA-Change 00:13:41.121 Write (01h): Supported LBA-Change 00:13:41.121 Read (02h): Supported 00:13:41.121 Compare (05h): Supported 00:13:41.121 Write Zeroes (08h): Supported LBA-Change 00:13:41.121 Dataset Management (09h): Supported LBA-Change 00:13:41.121 Unknown (0Ch): Supported 00:13:41.121 Unknown (12h): Supported 00:13:41.121 Copy (19h): Supported LBA-Change 00:13:41.121 Unknown (1Dh): Supported LBA-Change 00:13:41.121 00:13:41.121 Error Log 00:13:41.121 ========= 00:13:41.121 00:13:41.121 Arbitration 00:13:41.121 =========== 00:13:41.121 Arbitration Burst: no limit 00:13:41.121 00:13:41.121 Power Management 00:13:41.121 ================ 00:13:41.121 Number of Power States: 1 00:13:41.121 Current Power State: Power State #0 00:13:41.121 Power State #0: 00:13:41.121 Max Power: 25.00 W 00:13:41.121 Non-Operational State: Operational 00:13:41.121 Entry Latency: 16 microseconds 00:13:41.121 Exit Latency: 4 microseconds 00:13:41.121 Relative Read Throughput: 0 00:13:41.121 Relative Read Latency: 0 00:13:41.121 Relative Write Throughput: 0 00:13:41.121 Relative Write Latency: 0 00:13:41.121 Idle Power: Not Reported 00:13:41.121 Active Power: Not Reported 00:13:41.121 Non-Operational Permissive Mode: Not Supported 
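The Health Information and Active Namespaces blocks that follow repeat two derived figures worth sanity-checking: the Celsius temperature is consistent with a plain K - 273 conversion of the Kelvin reading, and the GiB figures follow from the LBA counts times the 4096-byte data size of the current LBA Format #04. A hypothetical shell check of both conversions (not part of nvme.sh):

    # Hypothetical sanity checks, not from the test scripts.
    # Temperature: NVMe reports Kelvin; 323 K and 343 K -> 50 C and 70 C.
    echo "$((323 - 273)) Celsius"
    # Namespace size: 1048576 LBAs x 4096-byte blocks = 4 GiB.
    echo "$((1048576 * 4096 / 1024 / 1024 / 1024)) GiB"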
00:13:41.121 00:13:41.121 Health Information 00:13:41.121 ================== 00:13:41.121 Critical Warnings: 00:13:41.121 Available Spare Space: OK 00:13:41.121 Temperature: OK 00:13:41.121 Device Reliability: OK 00:13:41.121 Read Only: No 00:13:41.121 Volatile Memory Backup: OK 00:13:41.121 Current Temperature: 323 Kelvin (50 Celsius) 00:13:41.121 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:41.121 Available Spare: 0% 00:13:41.121 Available Spare Threshold: 0% 00:13:41.121 Life Percentage Used: 0% 00:13:41.121 Data Units Read: 1921 00:13:41.121 Data Units Written: 1708 00:13:41.121 Host Read Commands: 97448 00:13:41.121 Host Write Commands: 95717 00:13:41.121 Controller Busy Time: 0 minutes 00:13:41.121 Power Cycles: 0 00:13:41.121 Power On Hours: 0 hours 00:13:41.121 Unsafe Shutdowns: 0 00:13:41.121 Unrecoverable Media Errors: 0 00:13:41.121 Lifetime Error Log Entries: 0 00:13:41.121 Warning Temperature Time: 0 minutes 00:13:41.121 Critical Temperature Time: 0 minutes 00:13:41.121 00:13:41.121 Number of Queues 00:13:41.121 ================ 00:13:41.121 Number of I/O Submission Queues: 64 00:13:41.121 Number of I/O Completion Queues: 64 00:13:41.121 00:13:41.121 ZNS Specific Controller Data 00:13:41.121 ============================ 00:13:41.121 Zone Append Size Limit: 0 00:13:41.121 00:13:41.121 00:13:41.121 Active Namespaces 00:13:41.121 ================= 00:13:41.121 Namespace ID:1 00:13:41.121 Error Recovery Timeout: Unlimited 00:13:41.121 Command Set Identifier: NVM (00h) 00:13:41.121 Deallocate: Supported 00:13:41.121 Deallocated/Unwritten Error: Supported 00:13:41.121 Deallocated Read Value: All 0x00 00:13:41.121 Deallocate in Write Zeroes: Not Supported 00:13:41.121 Deallocated Guard Field: 0xFFFF 00:13:41.121 Flush: Supported 00:13:41.121 Reservation: Not Supported 00:13:41.121 Namespace Sharing Capabilities: Private 00:13:41.121 Size (in LBAs): 1048576 (4GiB) 00:13:41.121 Capacity (in LBAs): 1048576 (4GiB) 00:13:41.121 Utilization (in LBAs): 1048576 (4GiB) 00:13:41.121 Thin Provisioning: Not Supported 00:13:41.121 Per-NS Atomic Units: No 00:13:41.121 Maximum Single Source Range Length: 128 00:13:41.121 Maximum Copy Length: 128 00:13:41.121 Maximum Source Range Count: 128 00:13:41.121 NGUID/EUI64 Never Reused: No 00:13:41.121 Namespace Write Protected: No 00:13:41.121 Number of LBA Formats: 8 00:13:41.121 Current LBA Format: LBA Format #04 00:13:41.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:41.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:41.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:41.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:41.121 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:41.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:41.121 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:41.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:41.121 00:13:41.121 NVM Specific Namespace Data 00:13:41.121 =========================== 00:13:41.121 Logical Block Storage Tag Mask: 0 00:13:41.121 Protection Information Capabilities: 00:13:41.121 16b Guard Protection Information Storage Tag Support: No 00:13:41.121 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:41.121 Storage Tag Check Read Support: No 00:13:41.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Namespace ID:2 00:13:41.121 Error Recovery Timeout: Unlimited 00:13:41.121 Command Set Identifier: NVM (00h) 00:13:41.121 Deallocate: Supported 00:13:41.121 Deallocated/Unwritten Error: Supported 00:13:41.121 Deallocated Read Value: All 0x00 00:13:41.121 Deallocate in Write Zeroes: Not Supported 00:13:41.121 Deallocated Guard Field: 0xFFFF 00:13:41.121 Flush: Supported 00:13:41.121 Reservation: Not Supported 00:13:41.121 Namespace Sharing Capabilities: Private 00:13:41.121 Size (in LBAs): 1048576 (4GiB) 00:13:41.121 Capacity (in LBAs): 1048576 (4GiB) 00:13:41.121 Utilization (in LBAs): 1048576 (4GiB) 00:13:41.121 Thin Provisioning: Not Supported 00:13:41.121 Per-NS Atomic Units: No 00:13:41.121 Maximum Single Source Range Length: 128 00:13:41.121 Maximum Copy Length: 128 00:13:41.121 Maximum Source Range Count: 128 00:13:41.121 NGUID/EUI64 Never Reused: No 00:13:41.121 Namespace Write Protected: No 00:13:41.121 Number of LBA Formats: 8 00:13:41.121 Current LBA Format: LBA Format #04 00:13:41.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:41.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:41.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:41.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:41.121 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:41.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:41.121 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:41.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:41.121 00:13:41.121 NVM Specific Namespace Data 00:13:41.121 =========================== 00:13:41.121 Logical Block Storage Tag Mask: 0 00:13:41.121 Protection Information Capabilities: 00:13:41.121 16b Guard Protection Information Storage Tag Support: No 00:13:41.121 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:41.121 Storage Tag Check Read Support: No 00:13:41.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.121 Namespace ID:3 00:13:41.121 Error Recovery Timeout: Unlimited 00:13:41.121 Command Set Identifier: NVM (00h) 00:13:41.121 Deallocate: Supported 00:13:41.121 Deallocated/Unwritten Error: Supported 00:13:41.121 Deallocated Read 
Value: All 0x00 00:13:41.121 Deallocate in Write Zeroes: Not Supported 00:13:41.121 Deallocated Guard Field: 0xFFFF 00:13:41.121 Flush: Supported 00:13:41.121 Reservation: Not Supported 00:13:41.121 Namespace Sharing Capabilities: Private 00:13:41.121 Size (in LBAs): 1048576 (4GiB) 00:13:41.121 Capacity (in LBAs): 1048576 (4GiB) 00:13:41.121 Utilization (in LBAs): 1048576 (4GiB) 00:13:41.122 Thin Provisioning: Not Supported 00:13:41.122 Per-NS Atomic Units: No 00:13:41.122 Maximum Single Source Range Length: 128 00:13:41.122 Maximum Copy Length: 128 00:13:41.122 Maximum Source Range Count: 128 00:13:41.122 NGUID/EUI64 Never Reused: No 00:13:41.122 Namespace Write Protected: No 00:13:41.122 Number of LBA Formats: 8 00:13:41.122 Current LBA Format: LBA Format #04 00:13:41.122 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:41.122 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:41.122 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:41.122 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:41.122 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:41.122 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:41.122 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:41.122 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:41.122 00:13:41.122 NVM Specific Namespace Data 00:13:41.122 =========================== 00:13:41.122 Logical Block Storage Tag Mask: 0 00:13:41.122 Protection Information Capabilities: 00:13:41.122 16b Guard Protection Information Storage Tag Support: No 00:13:41.122 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:41.122 Storage Tag Check Read Support: No 00:13:41.122 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.122 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.122 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.122 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.122 13:39:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:41.122 13:39:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:13:41.380 ===================================================== 00:13:41.380 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:41.380 ===================================================== 00:13:41.380 Controller Capabilities/Features 00:13:41.380 ================================ 00:13:41.381 Vendor ID: 1b36 00:13:41.381 Subsystem Vendor ID: 1af4 00:13:41.381 Serial Number: 12343 00:13:41.381 Model Number: QEMU NVMe Ctrl 00:13:41.381 Firmware Version: 8.0.0 00:13:41.381 Recommended Arb Burst: 6 00:13:41.381 IEEE OUI Identifier: 00 54 52 00:13:41.381 Multi-path I/O 00:13:41.381 May have multiple subsystem ports: No 00:13:41.381 May have multiple controllers: Yes 00:13:41.381 Associated with SR-IOV VF: No 00:13:41.381 Max Data Transfer Size: 524288 00:13:41.381 Max Number of Namespaces: 
256 00:13:41.381 Max Number of I/O Queues: 64 00:13:41.381 NVMe Specification Version (VS): 1.4 00:13:41.381 NVMe Specification Version (Identify): 1.4 00:13:41.381 Maximum Queue Entries: 2048 00:13:41.381 Contiguous Queues Required: Yes 00:13:41.381 Arbitration Mechanisms Supported 00:13:41.381 Weighted Round Robin: Not Supported 00:13:41.381 Vendor Specific: Not Supported 00:13:41.381 Reset Timeout: 7500 ms 00:13:41.381 Doorbell Stride: 4 bytes 00:13:41.381 NVM Subsystem Reset: Not Supported 00:13:41.381 Command Sets Supported 00:13:41.381 NVM Command Set: Supported 00:13:41.381 Boot Partition: Not Supported 00:13:41.381 Memory Page Size Minimum: 4096 bytes 00:13:41.381 Memory Page Size Maximum: 65536 bytes 00:13:41.381 Persistent Memory Region: Not Supported 00:13:41.381 Optional Asynchronous Events Supported 00:13:41.381 Namespace Attribute Notices: Supported 00:13:41.381 Firmware Activation Notices: Not Supported 00:13:41.381 ANA Change Notices: Not Supported 00:13:41.381 PLE Aggregate Log Change Notices: Not Supported 00:13:41.381 LBA Status Info Alert Notices: Not Supported 00:13:41.381 EGE Aggregate Log Change Notices: Not Supported 00:13:41.381 Normal NVM Subsystem Shutdown event: Not Supported 00:13:41.381 Zone Descriptor Change Notices: Not Supported 00:13:41.381 Discovery Log Change Notices: Not Supported 00:13:41.381 Controller Attributes 00:13:41.381 128-bit Host Identifier: Not Supported 00:13:41.381 Non-Operational Permissive Mode: Not Supported 00:13:41.381 NVM Sets: Not Supported 00:13:41.381 Read Recovery Levels: Not Supported 00:13:41.381 Endurance Groups: Supported 00:13:41.381 Predictable Latency Mode: Not Supported 00:13:41.381 Traffic Based Keep Alive: Not Supported 00:13:41.381 Namespace Granularity: Not Supported 00:13:41.381 SQ Associations: Not Supported 00:13:41.381 UUID List: Not Supported 00:13:41.381 Multi-Domain Subsystem: Not Supported 00:13:41.381 Fixed Capacity Management: Not Supported 00:13:41.381 Variable Capacity Management: Not Supported 00:13:41.381 Delete Endurance Group: Not Supported 00:13:41.381 Delete NVM Set: Not Supported 00:13:41.381 Extended LBA Formats Supported: Supported 00:13:41.381 Flexible Data Placement Supported: Supported 00:13:41.381 00:13:41.381 Controller Memory Buffer Support 00:13:41.381 ================================ 00:13:41.381 Supported: No 00:13:41.381 00:13:41.381 Persistent Memory Region Support 00:13:41.381 ================================ 00:13:41.381 Supported: No 00:13:41.381 00:13:41.381 Admin Command Set Attributes 00:13:41.381 ============================ 00:13:41.381 Security Send/Receive: Not Supported 00:13:41.381 Format NVM: Supported 00:13:41.381 Firmware Activate/Download: Not Supported 00:13:41.381 Namespace Management: Supported 00:13:41.381 Device Self-Test: Not Supported 00:13:41.381 Directives: Supported 00:13:41.381 NVMe-MI: Not Supported 00:13:41.381 Virtualization Management: Not Supported 00:13:41.381 Doorbell Buffer Config: Supported 00:13:41.381 Get LBA Status Capability: Not Supported 00:13:41.381 Command & Feature Lockdown Capability: Not Supported 00:13:41.381 Abort Command Limit: 4 00:13:41.381 Async Event Request Limit: 4 00:13:41.381 Number of Firmware Slots: N/A 00:13:41.381 Firmware Slot 1 Read-Only: N/A 00:13:41.381 Firmware Activation Without Reset: N/A 00:13:41.381 Multiple Update Detection Support: N/A 00:13:41.381 Firmware Update Granularity: No Information Provided 00:13:41.381 Per-Namespace SMART Log: Yes 00:13:41.381 Asymmetric Namespace Access Log Page: Not Supported
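The controller dump above (and the matching dumps for the other PCIe addresses in this run) comes from a small per-device loop in nvme.sh that hands each BDF to spdk_nvme_identify; the 0000:00:13.0 invocation is visible earlier in this output. A minimal bash sketch of that pattern, assuming the build tree used by this job; the bdfs array below is a hypothetical stand-in for the enumeration the real script performs at runtime:

# Sketch only: identify each NVMe controller the way this test does.
# Assumes SPDK is built under /home/vagrant/spdk_repo/spdk, as in this job.
# The bdfs list is hypothetical; the real script discovers it dynamically.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    # -r selects the transport/address pair; -i 0 joins shared-memory group 0.
    "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done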
00:13:41.381 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:41.381 Command Effects Log Page: Supported 00:13:41.381 Get Log Page Extended Data: Supported 00:13:41.381 Telemetry Log Pages: Not Supported 00:13:41.381 Persistent Event Log Pages: Not Supported 00:13:41.381 Supported Log Pages Log Page: May Support 00:13:41.381 Commands Supported & Effects Log Page: Not Supported 00:13:41.381 Feature Identifiers & Effects Log Page: May Support 00:13:41.381 NVMe-MI Commands & Effects Log Page: May Support 00:13:41.381 Data Area 4 for Telemetry Log: Not Supported 00:13:41.381 Error Log Page Entries Supported: 1 00:13:41.381 Keep Alive: Not Supported 00:13:41.381 00:13:41.381 NVM Command Set Attributes 00:13:41.381 ========================== 00:13:41.381 Submission Queue Entry Size 00:13:41.381 Max: 64 00:13:41.381 Min: 64 00:13:41.381 Completion Queue Entry Size 00:13:41.381 Max: 16 00:13:41.381 Min: 16 00:13:41.381 Number of Namespaces: 256 00:13:41.381 Compare Command: Supported 00:13:41.381 Write Uncorrectable Command: Not Supported 00:13:41.381 Dataset Management Command: Supported 00:13:41.381 Write Zeroes Command: Supported 00:13:41.381 Set Features Save Field: Supported 00:13:41.381 Reservations: Not Supported 00:13:41.381 Timestamp: Supported 00:13:41.381 Copy: Supported 00:13:41.381 Volatile Write Cache: Present 00:13:41.381 Atomic Write Unit (Normal): 1 00:13:41.381 Atomic Write Unit (PFail): 1 00:13:41.381 Atomic Compare & Write Unit: 1 00:13:41.381 Fused Compare & Write: Not Supported 00:13:41.381 Scatter-Gather List 00:13:41.381 SGL Command Set: Supported 00:13:41.381 SGL Keyed: Not Supported 00:13:41.381 SGL Bit Bucket Descriptor: Not Supported 00:13:41.381 SGL Metadata Pointer: Not Supported 00:13:41.381 Oversized SGL: Not Supported 00:13:41.381 SGL Metadata Address: Not Supported 00:13:41.381 SGL Offset: Not Supported 00:13:41.381 Transport SGL Data Block: Not Supported 00:13:41.381 Replay Protected Memory Block: Not Supported 00:13:41.381 00:13:41.381 Firmware Slot Information 00:13:41.381 ========================= 00:13:41.381 Active slot: 1 00:13:41.381 Slot 1 Firmware Revision: 1.0 00:13:41.381 00:13:41.381 00:13:41.381 Commands Supported and Effects 00:13:41.381 ============================== 00:13:41.381 Admin Commands 00:13:41.381 -------------- 00:13:41.381 Delete I/O Submission Queue (00h): Supported 00:13:41.381 Create I/O Submission Queue (01h): Supported 00:13:41.381 Get Log Page (02h): Supported 00:13:41.381 Delete I/O Completion Queue (04h): Supported 00:13:41.381 Create I/O Completion Queue (05h): Supported 00:13:41.381 Identify (06h): Supported 00:13:41.381 Abort (08h): Supported 00:13:41.381 Set Features (09h): Supported 00:13:41.381 Get Features (0Ah): Supported 00:13:41.381 Asynchronous Event Request (0Ch): Supported 00:13:41.381 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:41.381 Directive Send (19h): Supported 00:13:41.381 Directive Receive (1Ah): Supported 00:13:41.381 Virtualization Management (1Ch): Supported 00:13:41.381 Doorbell Buffer Config (7Ch): Supported 00:13:41.381 Format NVM (80h): Supported LBA-Change 00:13:41.381 I/O Commands 00:13:41.381 ------------ 00:13:41.381 Flush (00h): Supported LBA-Change 00:13:41.381 Write (01h): Supported LBA-Change 00:13:41.381 Read (02h): Supported 00:13:41.381 Compare (05h): Supported 00:13:41.381 Write Zeroes (08h): Supported LBA-Change 00:13:41.381 Dataset Management (09h): Supported LBA-Change 00:13:41.381 Unknown (0Ch): Supported 00:13:41.381 Unknown (12h): Supported 00:13:41.381 Copy
(19h): Supported LBA-Change 00:13:41.381 Unknown (1Dh): Supported LBA-Change 00:13:41.381 00:13:41.381 Error Log 00:13:41.381 ========= 00:13:41.381 00:13:41.381 Arbitration 00:13:41.381 =========== 00:13:41.381 Arbitration Burst: no limit 00:13:41.381 00:13:41.381 Power Management 00:13:41.381 ================ 00:13:41.381 Number of Power States: 1 00:13:41.381 Current Power State: Power State #0 00:13:41.381 Power State #0: 00:13:41.381 Max Power: 25.00 W 00:13:41.381 Non-Operational State: Operational 00:13:41.381 Entry Latency: 16 microseconds 00:13:41.381 Exit Latency: 4 microseconds 00:13:41.381 Relative Read Throughput: 0 00:13:41.381 Relative Read Latency: 0 00:13:41.381 Relative Write Throughput: 0 00:13:41.381 Relative Write Latency: 0 00:13:41.381 Idle Power: Not Reported 00:13:41.381 Active Power: Not Reported 00:13:41.381 Non-Operational Permissive Mode: Not Supported 00:13:41.381 00:13:41.381 Health Information 00:13:41.381 ================== 00:13:41.381 Critical Warnings: 00:13:41.381 Available Spare Space: OK 00:13:41.381 Temperature: OK 00:13:41.382 Device Reliability: OK 00:13:41.382 Read Only: No 00:13:41.382 Volatile Memory Backup: OK 00:13:41.382 Current Temperature: 323 Kelvin (50 Celsius) 00:13:41.382 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:41.382 Available Spare: 0% 00:13:41.382 Available Spare Threshold: 0% 00:13:41.382 Life Percentage Used: 0% 00:13:41.382 Data Units Read: 699 00:13:41.382 Data Units Written: 628 00:13:41.382 Host Read Commands: 32960 00:13:41.382 Host Write Commands: 32383 00:13:41.382 Controller Busy Time: 0 minutes 00:13:41.382 Power Cycles: 0 00:13:41.382 Power On Hours: 0 hours 00:13:41.382 Unsafe Shutdowns: 0 00:13:41.382 Unrecoverable Media Errors: 0 00:13:41.382 Lifetime Error Log Entries: 0 00:13:41.382 Warning Temperature Time: 0 minutes 00:13:41.382 Critical Temperature Time: 0 minutes 00:13:41.382 00:13:41.382 Number of Queues 00:13:41.382 ================ 00:13:41.382 Number of I/O Submission Queues: 64 00:13:41.382 Number of I/O Completion Queues: 64 00:13:41.382 00:13:41.382 ZNS Specific Controller Data 00:13:41.382 ============================ 00:13:41.382 Zone Append Size Limit: 0 00:13:41.382 00:13:41.382 00:13:41.382 Active Namespaces 00:13:41.382 ================= 00:13:41.382 Namespace ID:1 00:13:41.382 Error Recovery Timeout: Unlimited 00:13:41.382 Command Set Identifier: NVM (00h) 00:13:41.382 Deallocate: Supported 00:13:41.382 Deallocated/Unwritten Error: Supported 00:13:41.382 Deallocated Read Value: All 0x00 00:13:41.382 Deallocate in Write Zeroes: Not Supported 00:13:41.382 Deallocated Guard Field: 0xFFFF 00:13:41.382 Flush: Supported 00:13:41.382 Reservation: Not Supported 00:13:41.382 Namespace Sharing Capabilities: Multiple Controllers 00:13:41.382 Size (in LBAs): 262144 (1GiB) 00:13:41.382 Capacity (in LBAs): 262144 (1GiB) 00:13:41.382 Utilization (in LBAs): 262144 (1GiB) 00:13:41.382 Thin Provisioning: Not Supported 00:13:41.382 Per-NS Atomic Units: No 00:13:41.382 Maximum Single Source Range Length: 128 00:13:41.382 Maximum Copy Length: 128 00:13:41.382 Maximum Source Range Count: 128 00:13:41.382 NGUID/EUI64 Never Reused: No 00:13:41.382 Namespace Write Protected: No 00:13:41.382 Endurance group ID: 1 00:13:41.382 Number of LBA Formats: 8 00:13:41.382 Current LBA Format: LBA Format #04 00:13:41.382 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:41.382 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:41.382 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:41.382 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:13:41.382 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:41.382 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:41.382 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:41.382 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:41.382 00:13:41.382 Get Feature FDP: 00:13:41.382 ================ 00:13:41.382 Enabled: Yes 00:13:41.382 FDP configuration index: 0 00:13:41.382 00:13:41.382 FDP configurations log page 00:13:41.382 =========================== 00:13:41.382 Number of FDP configurations: 1 00:13:41.382 Version: 0 00:13:41.382 Size: 112 00:13:41.382 FDP Configuration Descriptor: 0 00:13:41.382 Descriptor Size: 96 00:13:41.382 Reclaim Group Identifier format: 2 00:13:41.382 FDP Volatile Write Cache: Not Present 00:13:41.382 FDP Configuration: Valid 00:13:41.382 Vendor Specific Size: 0 00:13:41.382 Number of Reclaim Groups: 2 00:13:41.382 Number of Reclaim Unit Handles: 8 00:13:41.382 Max Placement Identifiers: 128 00:13:41.382 Number of Namespaces Supported: 256 00:13:41.382 Reclaim Unit Nominal Size: 6000000 bytes 00:13:41.382 Estimated Reclaim Unit Time Limit: Not Reported 00:13:41.382 RUH Desc #000: RUH Type: Initially Isolated 00:13:41.382 RUH Desc #001: RUH Type: Initially Isolated 00:13:41.382 RUH Desc #002: RUH Type: Initially Isolated 00:13:41.382 RUH Desc #003: RUH Type: Initially Isolated 00:13:41.382 RUH Desc #004: RUH Type: Initially Isolated 00:13:41.382 RUH Desc #005: RUH Type: Initially Isolated 00:13:41.382 RUH Desc #006: RUH Type: Initially Isolated 00:13:41.382 RUH Desc #007: RUH Type: Initially Isolated 00:13:41.382 00:13:41.382 FDP reclaim unit handle usage log page 00:13:41.641 ====================================== 00:13:41.641 Number of Reclaim Unit Handles: 8 00:13:41.641 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:41.641 RUH Usage Desc #001: RUH Attributes: Unused 00:13:41.641 RUH Usage Desc #002: RUH Attributes: Unused 00:13:41.641 RUH Usage Desc #003: RUH Attributes: Unused 00:13:41.641 RUH Usage Desc #004: RUH Attributes: Unused 00:13:41.641 RUH Usage Desc #005: RUH Attributes: Unused 00:13:41.641 RUH Usage Desc #006: RUH Attributes: Unused 00:13:41.641 RUH Usage Desc #007: RUH Attributes: Unused 00:13:41.641 00:13:41.641 FDP statistics log page 00:13:41.641 ======================= 00:13:41.641 Host bytes with metadata written: 400596992 00:13:41.641 Media bytes with metadata written: 400637952 00:13:41.641 Media bytes erased: 0 00:13:41.641 00:13:41.641 FDP events log page 00:13:41.641 =================== 00:13:41.641 Number of FDP events: 0 00:13:41.641 00:13:41.641 NVM Specific Namespace Data 00:13:41.641 =========================== 00:13:41.641 Logical Block Storage Tag Mask: 0 00:13:41.641 Protection Information Capabilities: 00:13:41.641 16b Guard Protection Information Storage Tag Support: No 00:13:41.641 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:41.641 Storage Tag Check Read Support: No 00:13:41.641 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.641 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.641 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.641 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.641 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.641 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.641 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.641 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:41.641 00:13:41.641 real 0m2.043s 00:13:41.641 user 0m0.838s 00:13:41.641 sys 0m0.999s 00:13:41.641 13:39:35 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:41.641 ************************************ 00:13:41.641 END TEST nvme_identify 00:13:41.641 ************************************ 00:13:41.641 13:39:35 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:13:41.641 13:39:35 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:13:41.641 13:39:35 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:41.641 13:39:35 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:41.641 13:39:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.641 ************************************ 00:13:41.641 START TEST nvme_perf 00:13:41.641 ************************************ 00:13:41.641 13:39:35 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:13:41.641 13:39:35 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:13:43.019 Initializing NVMe Controllers 00:13:43.019 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:43.019 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:43.019 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:43.019 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:43.019 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:43.019 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:43.019 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:43.019 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:43.019 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:43.019 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:43.019 Initialization complete. Launching workers. 
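The latency tables below were produced by the spdk_nvme_perf command the test just launched (see the invocation above). A minimal annotated bash sketch of that command, assuming the same build tree; the per-flag notes are inferred from the tool's usage text and from the output that follows, not options added here, and the -N flag is simply carried over from the job as-is:

# Sketch of the perf run above, with the flags annotated.
# Assumes SPDK is built under /home/vagrant/spdk_repo/spdk, as in this job.
args=(
    -q 128    # queue depth: 128 outstanding I/Os
    -w read   # workload type: reads
    -o 12288  # I/O size in bytes (12 KiB)
    -t 1      # run time in seconds
    -LL       # latency tracking; doubled to emit the per-range histograms below
    -i 0      # shared-memory group ID
    -N        # carried over from the job unchanged (see spdk_nvme_perf -h)
)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf "${args[@]}"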
00:13:43.019 ======================================================== 00:13:43.019 Latency(us) 00:13:43.019 Device Information : IOPS MiB/s Average min max 00:13:43.019 PCIE (0000:00:13.0) NSID 1 from core 0: 11344.73 132.95 11317.73 8580.21 55748.28 00:13:43.019 PCIE (0000:00:10.0) NSID 1 from core 0: 11344.73 132.95 11286.27 8489.10 52701.82 00:13:43.019 PCIE (0000:00:11.0) NSID 1 from core 0: 11344.73 132.95 11255.68 8628.52 49037.60 00:13:43.019 PCIE (0000:00:12.0) NSID 1 from core 0: 11344.73 132.95 11224.95 8563.27 46096.94 00:13:43.020 PCIE (0000:00:12.0) NSID 2 from core 0: 11344.73 132.95 11192.38 8519.70 42667.39 00:13:43.020 PCIE (0000:00:12.0) NSID 3 from core 0: 11408.46 133.69 11099.53 8620.53 33933.42 00:13:43.020 ======================================================== 00:13:43.020 Total : 68132.09 798.42 11229.30 8489.10 55748.28 00:13:43.020 00:13:43.020 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:43.020 ================================================================================= 00:13:43.020 1.00000% : 8987.794us 00:13:43.020 10.00000% : 9986.438us 00:13:43.020 25.00000% : 10360.930us 00:13:43.020 50.00000% : 10797.836us 00:13:43.020 75.00000% : 11297.158us 00:13:43.020 90.00000% : 12046.141us 00:13:43.020 95.00000% : 12919.954us 00:13:43.020 98.00000% : 14168.259us 00:13:43.020 99.00000% : 46436.937us 00:13:43.020 99.50000% : 53677.105us 00:13:43.020 99.90000% : 55424.731us 00:13:43.020 99.99000% : 55924.053us 00:13:43.020 99.99900% : 55924.053us 00:13:43.020 99.99990% : 55924.053us 00:13:43.020 99.99999% : 55924.053us 00:13:43.020 00:13:43.020 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:43.020 ================================================================================= 00:13:43.020 1.00000% : 8925.379us 00:13:43.020 10.00000% : 9861.608us 00:13:43.020 25.00000% : 10298.514us 00:13:43.020 50.00000% : 10860.251us 00:13:43.020 75.00000% : 11359.573us 00:13:43.020 90.00000% : 12108.556us 00:13:43.020 95.00000% : 12857.539us 00:13:43.020 98.00000% : 14355.505us 00:13:43.020 99.00000% : 43191.345us 00:13:43.020 99.50000% : 50181.851us 00:13:43.020 99.90000% : 52428.800us 00:13:43.020 99.99000% : 52678.461us 00:13:43.020 99.99900% : 52928.122us 00:13:43.020 99.99990% : 52928.122us 00:13:43.020 99.99999% : 52928.122us 00:13:43.020 00:13:43.020 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:43.020 ================================================================================= 00:13:43.020 1.00000% : 8987.794us 00:13:43.020 10.00000% : 9986.438us 00:13:43.020 25.00000% : 10360.930us 00:13:43.020 50.00000% : 10797.836us 00:13:43.020 75.00000% : 11297.158us 00:13:43.020 90.00000% : 12108.556us 00:13:43.020 95.00000% : 12919.954us 00:13:43.020 98.00000% : 14792.411us 00:13:43.020 99.00000% : 39696.091us 00:13:43.020 99.50000% : 46686.598us 00:13:43.020 99.90000% : 48683.886us 00:13:43.020 99.99000% : 49183.208us 00:13:43.020 99.99900% : 49183.208us 00:13:43.020 99.99990% : 49183.208us 00:13:43.020 99.99999% : 49183.208us 00:13:43.020 00:13:43.020 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:43.020 ================================================================================= 00:13:43.020 1.00000% : 8925.379us 00:13:43.020 10.00000% : 9986.438us 00:13:43.020 25.00000% : 10360.930us 00:13:43.020 50.00000% : 10797.836us 00:13:43.020 75.00000% : 11297.158us 00:13:43.020 90.00000% : 12170.971us 00:13:43.020 95.00000% : 13044.785us 00:13:43.020 98.00000% : 14729.996us 
00:13:43.020 99.00000% : 36700.160us 00:13:43.020 99.50000% : 43940.328us 00:13:43.020 99.90000% : 45687.954us 00:13:43.020 99.99000% : 46187.276us 00:13:43.020 99.99900% : 46187.276us 00:13:43.020 99.99990% : 46187.276us 00:13:43.020 99.99999% : 46187.276us 00:13:43.020 00:13:43.020 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:43.020 ================================================================================= 00:13:43.020 1.00000% : 8925.379us 00:13:43.020 10.00000% : 9986.438us 00:13:43.020 25.00000% : 10360.930us 00:13:43.020 50.00000% : 10797.836us 00:13:43.020 75.00000% : 11297.158us 00:13:43.020 90.00000% : 12170.971us 00:13:43.020 95.00000% : 13107.200us 00:13:43.020 98.00000% : 14605.166us 00:13:43.020 99.00000% : 33204.907us 00:13:43.020 99.50000% : 40445.074us 00:13:43.020 99.90000% : 42192.701us 00:13:43.020 99.99000% : 42692.023us 00:13:43.020 99.99900% : 42692.023us 00:13:43.020 99.99990% : 42692.023us 00:13:43.020 99.99999% : 42692.023us 00:13:43.020 00:13:43.020 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:43.020 ================================================================================= 00:13:43.020 1.00000% : 8987.794us 00:13:43.020 10.00000% : 9986.438us 00:13:43.020 25.00000% : 10360.930us 00:13:43.020 50.00000% : 10797.836us 00:13:43.020 75.00000% : 11297.158us 00:13:43.020 90.00000% : 12108.556us 00:13:43.020 95.00000% : 13107.200us 00:13:43.020 98.00000% : 14480.335us 00:13:43.020 99.00000% : 24591.604us 00:13:43.020 99.50000% : 31582.110us 00:13:43.020 99.90000% : 33704.229us 00:13:43.020 99.99000% : 33953.890us 00:13:43.020 99.99900% : 33953.890us 00:13:43.020 99.99990% : 33953.890us 00:13:43.020 99.99999% : 33953.890us 00:13:43.020 00:13:43.020 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:43.020 ============================================================================== 00:13:43.020 Range in us Cumulative IO count 00:13:43.020 8550.888 - 8613.303: 0.0263% ( 3) 00:13:43.020 8613.303 - 8675.718: 0.0966% ( 8) 00:13:43.020 8675.718 - 8738.133: 0.2370% ( 16) 00:13:43.020 8738.133 - 8800.549: 0.4126% ( 20) 00:13:43.020 8800.549 - 8862.964: 0.6232% ( 24) 00:13:43.020 8862.964 - 8925.379: 0.8954% ( 31) 00:13:43.020 8925.379 - 8987.794: 1.2992% ( 46) 00:13:43.020 8987.794 - 9050.210: 1.6854% ( 44) 00:13:43.020 9050.210 - 9112.625: 2.1243% ( 50) 00:13:43.020 9112.625 - 9175.040: 2.6246% ( 57) 00:13:43.020 9175.040 - 9237.455: 3.0723% ( 51) 00:13:43.020 9237.455 - 9299.870: 3.5551% ( 55) 00:13:43.020 9299.870 - 9362.286: 4.1257% ( 65) 00:13:43.020 9362.286 - 9424.701: 4.6348% ( 58) 00:13:43.020 9424.701 - 9487.116: 5.2230% ( 67) 00:13:43.020 9487.116 - 9549.531: 5.7409% ( 59) 00:13:43.020 9549.531 - 9611.947: 6.3027% ( 64) 00:13:43.020 9611.947 - 9674.362: 6.8732% ( 65) 00:13:43.020 9674.362 - 9736.777: 7.4263% ( 63) 00:13:43.020 9736.777 - 9799.192: 8.0495% ( 71) 00:13:43.020 9799.192 - 9861.608: 8.8483% ( 91) 00:13:43.020 9861.608 - 9924.023: 9.7963% ( 108) 00:13:43.020 9924.023 - 9986.438: 11.1482% ( 154) 00:13:43.020 9986.438 - 10048.853: 12.8511% ( 194) 00:13:43.020 10048.853 - 10111.269: 14.8789% ( 231) 00:13:43.020 10111.269 - 10173.684: 17.1524% ( 259) 00:13:43.020 10173.684 - 10236.099: 19.6454% ( 284) 00:13:43.020 10236.099 - 10298.514: 22.5334% ( 329) 00:13:43.020 10298.514 - 10360.930: 25.5706% ( 346) 00:13:43.020 10360.930 - 10423.345: 28.8975% ( 379) 00:13:43.020 10423.345 - 10485.760: 32.3121% ( 389) 00:13:43.020 10485.760 - 10548.175: 35.7356% ( 390) 00:13:43.020 10548.175 
- 10610.590: 39.2556% ( 401) 00:13:43.020 10610.590 - 10673.006: 42.6352% ( 385) 00:13:43.020 10673.006 - 10735.421: 46.3746% ( 426) 00:13:43.020 10735.421 - 10797.836: 50.0088% ( 414) 00:13:43.020 10797.836 - 10860.251: 53.7570% ( 427) 00:13:43.020 10860.251 - 10922.667: 57.5316% ( 430) 00:13:43.020 10922.667 - 10985.082: 61.2798% ( 427) 00:13:43.020 10985.082 - 11047.497: 64.9228% ( 415) 00:13:43.020 11047.497 - 11109.912: 68.3111% ( 386) 00:13:43.020 11109.912 - 11172.328: 71.4273% ( 355) 00:13:43.020 11172.328 - 11234.743: 74.1573% ( 311) 00:13:43.020 11234.743 - 11297.158: 76.6503% ( 284) 00:13:43.020 11297.158 - 11359.573: 78.8799% ( 254) 00:13:43.020 11359.573 - 11421.989: 80.7584% ( 214) 00:13:43.020 11421.989 - 11484.404: 82.2507% ( 170) 00:13:43.020 11484.404 - 11546.819: 83.6025% ( 154) 00:13:43.020 11546.819 - 11609.234: 84.7612% ( 132) 00:13:43.020 11609.234 - 11671.650: 85.7707% ( 115) 00:13:43.020 11671.650 - 11734.065: 86.7188% ( 108) 00:13:43.020 11734.065 - 11796.480: 87.6317% ( 104) 00:13:43.020 11796.480 - 11858.895: 88.4568% ( 94) 00:13:43.020 11858.895 - 11921.310: 89.1942% ( 84) 00:13:43.020 11921.310 - 11983.726: 89.8789% ( 78) 00:13:43.020 11983.726 - 12046.141: 90.5548% ( 77) 00:13:43.020 12046.141 - 12108.556: 91.1605% ( 69) 00:13:43.020 12108.556 - 12170.971: 91.6871% ( 60) 00:13:43.020 12170.971 - 12233.387: 92.1261% ( 50) 00:13:43.020 12233.387 - 12295.802: 92.4947% ( 42) 00:13:43.020 12295.802 - 12358.217: 92.8283% ( 38) 00:13:43.020 12358.217 - 12420.632: 93.1970% ( 42) 00:13:43.020 12420.632 - 12483.048: 93.4779% ( 32) 00:13:43.020 12483.048 - 12545.463: 93.7939% ( 36) 00:13:43.020 12545.463 - 12607.878: 94.0572% ( 30) 00:13:43.020 12607.878 - 12670.293: 94.2679% ( 24) 00:13:43.020 12670.293 - 12732.709: 94.4874% ( 25) 00:13:43.020 12732.709 - 12795.124: 94.7244% ( 27) 00:13:43.020 12795.124 - 12857.539: 94.9087% ( 21) 00:13:43.020 12857.539 - 12919.954: 95.0843% ( 20) 00:13:43.020 12919.954 - 12982.370: 95.2335% ( 17) 00:13:43.020 12982.370 - 13044.785: 95.4178% ( 21) 00:13:43.020 13044.785 - 13107.200: 95.5846% ( 19) 00:13:43.020 13107.200 - 13169.615: 95.7777% ( 22) 00:13:43.020 13169.615 - 13232.030: 95.9709% ( 22) 00:13:43.020 13232.030 - 13294.446: 96.1376% ( 19) 00:13:43.020 13294.446 - 13356.861: 96.3220% ( 21) 00:13:43.020 13356.861 - 13419.276: 96.4888% ( 19) 00:13:43.020 13419.276 - 13481.691: 96.6468% ( 18) 00:13:43.020 13481.691 - 13544.107: 96.7872% ( 16) 00:13:43.020 13544.107 - 13606.522: 96.9101% ( 14) 00:13:43.020 13606.522 - 13668.937: 97.0418% ( 15) 00:13:43.020 13668.937 - 13731.352: 97.1822% ( 16) 00:13:43.020 13731.352 - 13793.768: 97.3051% ( 14) 00:13:43.020 13793.768 - 13856.183: 97.4456% ( 16) 00:13:43.020 13856.183 - 13918.598: 97.5860% ( 16) 00:13:43.020 13918.598 - 13981.013: 97.6914% ( 12) 00:13:43.020 13981.013 - 14043.429: 97.8055% ( 13) 00:13:43.020 14043.429 - 14105.844: 97.9020% ( 11) 00:13:43.020 14105.844 - 14168.259: 98.0074% ( 12) 00:13:43.020 14168.259 - 14230.674: 98.0952% ( 10) 00:13:43.020 14230.674 - 14293.090: 98.1654% ( 8) 00:13:43.021 14293.090 - 14355.505: 98.2268% ( 7) 00:13:43.021 14355.505 - 14417.920: 98.2971% ( 8) 00:13:43.021 14417.920 - 14480.335: 98.3761% ( 9) 00:13:43.021 14480.335 - 14542.750: 98.4375% ( 7) 00:13:43.021 14542.750 - 14605.166: 98.4726% ( 4) 00:13:43.021 14605.166 - 14667.581: 98.5077% ( 4) 00:13:43.021 14667.581 - 14729.996: 98.5516% ( 5) 00:13:43.021 14729.996 - 14792.411: 98.5955% ( 5) 00:13:43.021 14792.411 - 14854.827: 98.6306% ( 4) 00:13:43.021 14854.827 - 14917.242: 98.6657% ( 
4) 00:13:43.021 14917.242 - 14979.657: 98.6833% ( 2) 00:13:43.021 14979.657 - 15042.072: 98.6921% ( 1) 00:13:43.021 15042.072 - 15104.488: 98.7096% ( 2) 00:13:43.021 15104.488 - 15166.903: 98.7272% ( 2) 00:13:43.021 15166.903 - 15229.318: 98.7447% ( 2) 00:13:43.021 15229.318 - 15291.733: 98.7623% ( 2) 00:13:43.021 15291.733 - 15354.149: 98.7798% ( 2) 00:13:43.021 15354.149 - 15416.564: 98.7886% ( 1) 00:13:43.021 15416.564 - 15478.979: 98.8150% ( 3) 00:13:43.021 15478.979 - 15541.394: 98.8325% ( 2) 00:13:43.021 15541.394 - 15603.810: 98.8501% ( 2) 00:13:43.021 15603.810 - 15666.225: 98.8676% ( 2) 00:13:43.021 15666.225 - 15728.640: 98.8764% ( 1) 00:13:43.021 45687.954 - 45937.615: 98.9027% ( 3) 00:13:43.021 45937.615 - 46187.276: 98.9642% ( 7) 00:13:43.021 46187.276 - 46436.937: 99.0169% ( 6) 00:13:43.021 46436.937 - 46686.598: 99.0695% ( 6) 00:13:43.021 46686.598 - 46936.259: 99.1222% ( 6) 00:13:43.021 46936.259 - 47185.920: 99.1749% ( 6) 00:13:43.021 47185.920 - 47435.581: 99.2363% ( 7) 00:13:43.021 47435.581 - 47685.242: 99.2890% ( 6) 00:13:43.021 47685.242 - 47934.903: 99.3416% ( 6) 00:13:43.021 47934.903 - 48184.564: 99.3855% ( 5) 00:13:43.021 48184.564 - 48434.225: 99.4382% ( 6) 00:13:43.021 53177.783 - 53427.444: 99.4821% ( 5) 00:13:43.021 53427.444 - 53677.105: 99.5348% ( 6) 00:13:43.021 53677.105 - 53926.766: 99.5962% ( 7) 00:13:43.021 53926.766 - 54176.427: 99.6577% ( 7) 00:13:43.021 54176.427 - 54426.088: 99.7103% ( 6) 00:13:43.021 54426.088 - 54675.749: 99.7630% ( 6) 00:13:43.021 54675.749 - 54925.410: 99.8157% ( 6) 00:13:43.021 54925.410 - 55175.070: 99.8771% ( 7) 00:13:43.021 55175.070 - 55424.731: 99.9298% ( 6) 00:13:43.021 55424.731 - 55674.392: 99.9824% ( 6) 00:13:43.021 55674.392 - 55924.053: 100.0000% ( 2) 00:13:43.021 00:13:43.021 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:43.021 ============================================================================== 00:13:43.021 Range in us Cumulative IO count 00:13:43.021 8488.472 - 8550.888: 0.0527% ( 6) 00:13:43.021 8550.888 - 8613.303: 0.1843% ( 15) 00:13:43.021 8613.303 - 8675.718: 0.3160% ( 15) 00:13:43.021 8675.718 - 8738.133: 0.5004% ( 21) 00:13:43.021 8738.133 - 8800.549: 0.6935% ( 22) 00:13:43.021 8800.549 - 8862.964: 0.9568% ( 30) 00:13:43.021 8862.964 - 8925.379: 1.3343% ( 43) 00:13:43.021 8925.379 - 8987.794: 1.6766% ( 39) 00:13:43.021 8987.794 - 9050.210: 2.0804% ( 46) 00:13:43.021 9050.210 - 9112.625: 2.4930% ( 47) 00:13:43.021 9112.625 - 9175.040: 2.8880% ( 45) 00:13:43.021 9175.040 - 9237.455: 3.3181% ( 49) 00:13:43.021 9237.455 - 9299.870: 3.7482% ( 49) 00:13:43.021 9299.870 - 9362.286: 4.2135% ( 53) 00:13:43.021 9362.286 - 9424.701: 4.7138% ( 57) 00:13:43.021 9424.701 - 9487.116: 5.1966% ( 55) 00:13:43.021 9487.116 - 9549.531: 5.7409% ( 62) 00:13:43.021 9549.531 - 9611.947: 6.2851% ( 62) 00:13:43.021 9611.947 - 9674.362: 6.9084% ( 71) 00:13:43.021 9674.362 - 9736.777: 7.6984% ( 90) 00:13:43.021 9736.777 - 9799.192: 8.6991% ( 114) 00:13:43.021 9799.192 - 9861.608: 10.0509% ( 154) 00:13:43.021 9861.608 - 9924.023: 11.6134% ( 178) 00:13:43.021 9924.023 - 9986.438: 13.3339% ( 196) 00:13:43.021 9986.438 - 10048.853: 15.3353% ( 228) 00:13:43.021 10048.853 - 10111.269: 17.5562% ( 253) 00:13:43.021 10111.269 - 10173.684: 19.9526% ( 273) 00:13:43.021 10173.684 - 10236.099: 22.6387% ( 306) 00:13:43.021 10236.099 - 10298.514: 25.4126% ( 316) 00:13:43.021 10298.514 - 10360.930: 28.3883% ( 339) 00:13:43.021 10360.930 - 10423.345: 31.3290% ( 335) 00:13:43.021 10423.345 - 10485.760: 34.1643% ( 323) 
00:13:43.021 10485.760 - 10548.175: 37.2279% ( 349) 00:13:43.021 10548.175 - 10610.590: 40.1159% ( 329) 00:13:43.021 10610.590 - 10673.006: 43.1706% ( 348) 00:13:43.021 10673.006 - 10735.421: 46.2693% ( 353) 00:13:43.021 10735.421 - 10797.836: 49.4821% ( 366) 00:13:43.021 10797.836 - 10860.251: 52.7124% ( 368) 00:13:43.021 10860.251 - 10922.667: 56.0305% ( 378) 00:13:43.021 10922.667 - 10985.082: 59.2170% ( 363) 00:13:43.021 10985.082 - 11047.497: 62.6931% ( 396) 00:13:43.021 11047.497 - 11109.912: 65.9498% ( 371) 00:13:43.021 11109.912 - 11172.328: 68.9343% ( 340) 00:13:43.021 11172.328 - 11234.743: 71.7609% ( 322) 00:13:43.021 11234.743 - 11297.158: 74.2363% ( 282) 00:13:43.021 11297.158 - 11359.573: 76.5449% ( 263) 00:13:43.021 11359.573 - 11421.989: 78.6254% ( 237) 00:13:43.021 11421.989 - 11484.404: 80.5829% ( 223) 00:13:43.021 11484.404 - 11546.819: 82.1454% ( 178) 00:13:43.021 11546.819 - 11609.234: 83.4884% ( 153) 00:13:43.021 11609.234 - 11671.650: 84.7349% ( 142) 00:13:43.021 11671.650 - 11734.065: 85.6654% ( 106) 00:13:43.021 11734.065 - 11796.480: 86.5695% ( 103) 00:13:43.021 11796.480 - 11858.895: 87.3771% ( 92) 00:13:43.021 11858.895 - 11921.310: 88.1935% ( 93) 00:13:43.021 11921.310 - 11983.726: 88.9572% ( 87) 00:13:43.021 11983.726 - 12046.141: 89.6594% ( 80) 00:13:43.021 12046.141 - 12108.556: 90.3441% ( 78) 00:13:43.021 12108.556 - 12170.971: 91.0112% ( 76) 00:13:43.021 12170.971 - 12233.387: 91.5643% ( 63) 00:13:43.021 12233.387 - 12295.802: 92.0734% ( 58) 00:13:43.021 12295.802 - 12358.217: 92.5913% ( 59) 00:13:43.021 12358.217 - 12420.632: 92.9775% ( 44) 00:13:43.021 12420.632 - 12483.048: 93.3725% ( 45) 00:13:43.021 12483.048 - 12545.463: 93.7939% ( 48) 00:13:43.021 12545.463 - 12607.878: 94.0660% ( 31) 00:13:43.021 12607.878 - 12670.293: 94.4435% ( 43) 00:13:43.021 12670.293 - 12732.709: 94.6541% ( 24) 00:13:43.021 12732.709 - 12795.124: 94.8560% ( 23) 00:13:43.021 12795.124 - 12857.539: 95.0579% ( 23) 00:13:43.021 12857.539 - 12919.954: 95.2511% ( 22) 00:13:43.021 12919.954 - 12982.370: 95.4091% ( 18) 00:13:43.021 12982.370 - 13044.785: 95.5758% ( 19) 00:13:43.021 13044.785 - 13107.200: 95.7251% ( 17) 00:13:43.021 13107.200 - 13169.615: 95.8831% ( 18) 00:13:43.021 13169.615 - 13232.030: 96.0147% ( 15) 00:13:43.021 13232.030 - 13294.446: 96.1903% ( 20) 00:13:43.021 13294.446 - 13356.861: 96.3308% ( 16) 00:13:43.021 13356.861 - 13419.276: 96.4712% ( 16) 00:13:43.021 13419.276 - 13481.691: 96.6204% ( 17) 00:13:43.021 13481.691 - 13544.107: 96.7697% ( 17) 00:13:43.021 13544.107 - 13606.522: 96.9277% ( 18) 00:13:43.021 13606.522 - 13668.937: 97.0769% ( 17) 00:13:43.021 13668.937 - 13731.352: 97.1822% ( 12) 00:13:43.021 13731.352 - 13793.768: 97.2612% ( 9) 00:13:43.021 13793.768 - 13856.183: 97.3754% ( 13) 00:13:43.021 13856.183 - 13918.598: 97.4807% ( 12) 00:13:43.021 13918.598 - 13981.013: 97.5860% ( 12) 00:13:43.021 13981.013 - 14043.429: 97.6650% ( 9) 00:13:43.021 14043.429 - 14105.844: 97.7353% ( 8) 00:13:43.021 14105.844 - 14168.259: 97.7967% ( 7) 00:13:43.021 14168.259 - 14230.674: 97.8933% ( 11) 00:13:43.021 14230.674 - 14293.090: 97.9723% ( 9) 00:13:43.021 14293.090 - 14355.505: 98.0600% ( 10) 00:13:43.021 14355.505 - 14417.920: 98.1127% ( 6) 00:13:43.021 14417.920 - 14480.335: 98.1566% ( 5) 00:13:43.021 14480.335 - 14542.750: 98.1829% ( 3) 00:13:43.021 14542.750 - 14605.166: 98.2268% ( 5) 00:13:43.021 14605.166 - 14667.581: 98.2619% ( 4) 00:13:43.021 14667.581 - 14729.996: 98.2883% ( 3) 00:13:43.021 14729.996 - 14792.411: 98.3146% ( 3) 00:13:43.021 14792.411 - 
14854.827: 98.3497% ( 4) 00:13:43.021 14854.827 - 14917.242: 98.3761% ( 3) 00:13:43.021 14917.242 - 14979.657: 98.4287% ( 6) 00:13:43.021 14979.657 - 15042.072: 98.4551% ( 3) 00:13:43.021 15042.072 - 15104.488: 98.5077% ( 6) 00:13:43.021 15104.488 - 15166.903: 98.5253% ( 2) 00:13:43.021 15166.903 - 15229.318: 98.5604% ( 4) 00:13:43.021 15229.318 - 15291.733: 98.5955% ( 4) 00:13:43.021 15291.733 - 15354.149: 98.6218% ( 3) 00:13:43.021 15354.149 - 15416.564: 98.6394% ( 2) 00:13:43.021 15416.564 - 15478.979: 98.6570% ( 2) 00:13:43.021 15478.979 - 15541.394: 98.6833% ( 3) 00:13:43.021 15541.394 - 15603.810: 98.6921% ( 1) 00:13:43.021 15603.810 - 15666.225: 98.7096% ( 2) 00:13:43.021 15666.225 - 15728.640: 98.7184% ( 1) 00:13:43.021 15728.640 - 15791.055: 98.7360% ( 2) 00:13:43.021 15791.055 - 15853.470: 98.7623% ( 3) 00:13:43.021 15853.470 - 15915.886: 98.7798% ( 2) 00:13:43.021 15915.886 - 15978.301: 98.7886% ( 1) 00:13:43.021 15978.301 - 16103.131: 98.8237% ( 4) 00:13:43.021 16103.131 - 16227.962: 98.8501% ( 3) 00:13:43.021 16227.962 - 16352.792: 98.8764% ( 3) 00:13:43.021 42442.362 - 42692.023: 98.9203% ( 5) 00:13:43.021 42692.023 - 42941.684: 98.9642% ( 5) 00:13:43.021 42941.684 - 43191.345: 99.0169% ( 6) 00:13:43.021 43191.345 - 43441.006: 99.0783% ( 7) 00:13:43.021 43441.006 - 43690.667: 99.1222% ( 5) 00:13:43.021 43690.667 - 43940.328: 99.1661% ( 5) 00:13:43.021 43940.328 - 44189.989: 99.2188% ( 6) 00:13:43.021 44189.989 - 44439.650: 99.2714% ( 6) 00:13:43.021 44439.650 - 44689.310: 99.3153% ( 5) 00:13:43.021 44689.310 - 44938.971: 99.3680% ( 6) 00:13:43.021 44938.971 - 45188.632: 99.4206% ( 6) 00:13:43.021 45188.632 - 45438.293: 99.4382% ( 2) 00:13:43.021 49682.530 - 49932.190: 99.4645% ( 3) 00:13:43.021 49932.190 - 50181.851: 99.5084% ( 5) 00:13:43.021 50181.851 - 50431.512: 99.5611% ( 6) 00:13:43.021 50431.512 - 50681.173: 99.5962% ( 4) 00:13:43.021 50681.173 - 50930.834: 99.6489% ( 6) 00:13:43.021 50930.834 - 51180.495: 99.7015% ( 6) 00:13:43.021 51180.495 - 51430.156: 99.7454% ( 5) 00:13:43.022 51430.156 - 51679.817: 99.7981% ( 6) 00:13:43.022 51679.817 - 51929.478: 99.8420% ( 5) 00:13:43.022 51929.478 - 52179.139: 99.8947% ( 6) 00:13:43.022 52179.139 - 52428.800: 99.9386% ( 5) 00:13:43.022 52428.800 - 52678.461: 99.9912% ( 6) 00:13:43.022 52678.461 - 52928.122: 100.0000% ( 1) 00:13:43.022 00:13:43.022 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:43.022 ============================================================================== 00:13:43.022 Range in us Cumulative IO count 00:13:43.022 8613.303 - 8675.718: 0.0966% ( 11) 00:13:43.022 8675.718 - 8738.133: 0.2282% ( 15) 00:13:43.022 8738.133 - 8800.549: 0.4213% ( 22) 00:13:43.022 8800.549 - 8862.964: 0.6232% ( 23) 00:13:43.022 8862.964 - 8925.379: 0.8778% ( 29) 00:13:43.022 8925.379 - 8987.794: 1.2114% ( 38) 00:13:43.022 8987.794 - 9050.210: 1.5362% ( 37) 00:13:43.022 9050.210 - 9112.625: 1.9926% ( 52) 00:13:43.022 9112.625 - 9175.040: 2.4842% ( 56) 00:13:43.022 9175.040 - 9237.455: 3.0197% ( 61) 00:13:43.022 9237.455 - 9299.870: 3.5288% ( 58) 00:13:43.022 9299.870 - 9362.286: 4.0379% ( 58) 00:13:43.022 9362.286 - 9424.701: 4.5734% ( 61) 00:13:43.022 9424.701 - 9487.116: 5.1001% ( 60) 00:13:43.022 9487.116 - 9549.531: 5.7058% ( 69) 00:13:43.022 9549.531 - 9611.947: 6.2588% ( 63) 00:13:43.022 9611.947 - 9674.362: 6.8030% ( 62) 00:13:43.022 9674.362 - 9736.777: 7.4350% ( 72) 00:13:43.022 9736.777 - 9799.192: 8.1110% ( 77) 00:13:43.022 9799.192 - 9861.608: 8.8659% ( 86) 00:13:43.022 9861.608 - 9924.023: 9.8578% ( 
113) 00:13:43.022 9924.023 - 9986.438: 11.3237% ( 167) 00:13:43.022 9986.438 - 10048.853: 13.0794% ( 200) 00:13:43.022 10048.853 - 10111.269: 15.0544% ( 225) 00:13:43.022 10111.269 - 10173.684: 17.3982% ( 267) 00:13:43.022 10173.684 - 10236.099: 20.0228% ( 299) 00:13:43.022 10236.099 - 10298.514: 22.9196% ( 330) 00:13:43.022 10298.514 - 10360.930: 26.0885% ( 361) 00:13:43.022 10360.930 - 10423.345: 29.3978% ( 377) 00:13:43.022 10423.345 - 10485.760: 32.8915% ( 398) 00:13:43.022 10485.760 - 10548.175: 36.3588% ( 395) 00:13:43.022 10548.175 - 10610.590: 39.8876% ( 402) 00:13:43.022 10610.590 - 10673.006: 43.4515% ( 406) 00:13:43.022 10673.006 - 10735.421: 47.1296% ( 419) 00:13:43.022 10735.421 - 10797.836: 50.7725% ( 415) 00:13:43.022 10797.836 - 10860.251: 54.4417% ( 418) 00:13:43.022 10860.251 - 10922.667: 57.9529% ( 400) 00:13:43.022 10922.667 - 10985.082: 61.5959% ( 415) 00:13:43.022 10985.082 - 11047.497: 65.1598% ( 406) 00:13:43.022 11047.497 - 11109.912: 68.4954% ( 380) 00:13:43.022 11109.912 - 11172.328: 71.3922% ( 330) 00:13:43.022 11172.328 - 11234.743: 73.8852% ( 284) 00:13:43.022 11234.743 - 11297.158: 76.2377% ( 268) 00:13:43.022 11297.158 - 11359.573: 78.3181% ( 237) 00:13:43.022 11359.573 - 11421.989: 80.1703% ( 211) 00:13:43.022 11421.989 - 11484.404: 81.6187% ( 165) 00:13:43.022 11484.404 - 11546.819: 82.9178% ( 148) 00:13:43.022 11546.819 - 11609.234: 83.9975% ( 123) 00:13:43.022 11609.234 - 11671.650: 85.0070% ( 115) 00:13:43.022 11671.650 - 11734.065: 85.8585% ( 97) 00:13:43.022 11734.065 - 11796.480: 86.7100% ( 97) 00:13:43.022 11796.480 - 11858.895: 87.4912% ( 89) 00:13:43.022 11858.895 - 11921.310: 88.2198% ( 83) 00:13:43.022 11921.310 - 11983.726: 88.9923% ( 88) 00:13:43.022 11983.726 - 12046.141: 89.6857% ( 79) 00:13:43.022 12046.141 - 12108.556: 90.3265% ( 73) 00:13:43.022 12108.556 - 12170.971: 90.9059% ( 66) 00:13:43.022 12170.971 - 12233.387: 91.4062% ( 57) 00:13:43.022 12233.387 - 12295.802: 91.8715% ( 53) 00:13:43.022 12295.802 - 12358.217: 92.3543% ( 55) 00:13:43.022 12358.217 - 12420.632: 92.7581% ( 46) 00:13:43.022 12420.632 - 12483.048: 93.1970% ( 50) 00:13:43.022 12483.048 - 12545.463: 93.5569% ( 41) 00:13:43.022 12545.463 - 12607.878: 93.8817% ( 37) 00:13:43.022 12607.878 - 12670.293: 94.1099% ( 26) 00:13:43.022 12670.293 - 12732.709: 94.3732% ( 30) 00:13:43.022 12732.709 - 12795.124: 94.6103% ( 27) 00:13:43.022 12795.124 - 12857.539: 94.8560% ( 28) 00:13:43.022 12857.539 - 12919.954: 95.1369% ( 32) 00:13:43.022 12919.954 - 12982.370: 95.3213% ( 21) 00:13:43.022 12982.370 - 13044.785: 95.5056% ( 21) 00:13:43.022 13044.785 - 13107.200: 95.6724% ( 19) 00:13:43.022 13107.200 - 13169.615: 95.8743% ( 23) 00:13:43.022 13169.615 - 13232.030: 96.0850% ( 24) 00:13:43.022 13232.030 - 13294.446: 96.2518% ( 19) 00:13:43.022 13294.446 - 13356.861: 96.4098% ( 18) 00:13:43.022 13356.861 - 13419.276: 96.5765% ( 19) 00:13:43.022 13419.276 - 13481.691: 96.7170% ( 16) 00:13:43.022 13481.691 - 13544.107: 96.8574% ( 16) 00:13:43.022 13544.107 - 13606.522: 96.9891% ( 15) 00:13:43.022 13606.522 - 13668.937: 97.1032% ( 13) 00:13:43.022 13668.937 - 13731.352: 97.2086% ( 12) 00:13:43.022 13731.352 - 13793.768: 97.3139% ( 12) 00:13:43.022 13793.768 - 13856.183: 97.4017% ( 10) 00:13:43.022 13856.183 - 13918.598: 97.4719% ( 8) 00:13:43.022 13918.598 - 13981.013: 97.5421% ( 8) 00:13:43.022 13981.013 - 14043.429: 97.6036% ( 7) 00:13:43.022 14043.429 - 14105.844: 97.6650% ( 7) 00:13:43.022 14105.844 - 14168.259: 97.7177% ( 6) 00:13:43.022 14168.259 - 14230.674: 97.7616% ( 5) 
00:13:43.022 14230.674 - 14293.090: 97.8055% ( 5) 00:13:43.022 14293.090 - 14355.505: 97.8581% ( 6) 00:13:43.022 14355.505 - 14417.920: 97.8757% ( 2) 00:13:43.022 14417.920 - 14480.335: 97.9020% ( 3) 00:13:43.022 14480.335 - 14542.750: 97.9196% ( 2) 00:13:43.022 14542.750 - 14605.166: 97.9459% ( 3) 00:13:43.022 14605.166 - 14667.581: 97.9635% ( 2) 00:13:43.022 14667.581 - 14729.996: 97.9898% ( 3) 00:13:43.022 14729.996 - 14792.411: 98.0074% ( 2) 00:13:43.022 14792.411 - 14854.827: 98.0249% ( 2) 00:13:43.022 14854.827 - 14917.242: 98.0425% ( 2) 00:13:43.022 14917.242 - 14979.657: 98.0688% ( 3) 00:13:43.022 14979.657 - 15042.072: 98.1039% ( 4) 00:13:43.022 15042.072 - 15104.488: 98.1478% ( 5) 00:13:43.022 15104.488 - 15166.903: 98.1829% ( 4) 00:13:43.022 15166.903 - 15229.318: 98.2180% ( 4) 00:13:43.022 15229.318 - 15291.733: 98.2619% ( 5) 00:13:43.022 15291.733 - 15354.149: 98.2971% ( 4) 00:13:43.022 15354.149 - 15416.564: 98.3322% ( 4) 00:13:43.022 15416.564 - 15478.979: 98.3761% ( 5) 00:13:43.022 15478.979 - 15541.394: 98.4112% ( 4) 00:13:43.022 15541.394 - 15603.810: 98.4638% ( 6) 00:13:43.022 15603.810 - 15666.225: 98.4989% ( 4) 00:13:43.022 15666.225 - 15728.640: 98.5341% ( 4) 00:13:43.022 15728.640 - 15791.055: 98.5516% ( 2) 00:13:43.022 15791.055 - 15853.470: 98.5604% ( 1) 00:13:43.022 15853.470 - 15915.886: 98.5867% ( 3) 00:13:43.022 15915.886 - 15978.301: 98.6043% ( 2) 00:13:43.022 15978.301 - 16103.131: 98.6394% ( 4) 00:13:43.022 16103.131 - 16227.962: 98.6745% ( 4) 00:13:43.022 16227.962 - 16352.792: 98.7096% ( 4) 00:13:43.022 16352.792 - 16477.623: 98.7447% ( 4) 00:13:43.022 16477.623 - 16602.453: 98.7798% ( 4) 00:13:43.022 16602.453 - 16727.284: 98.8150% ( 4) 00:13:43.022 16727.284 - 16852.114: 98.8501% ( 4) 00:13:43.022 16852.114 - 16976.945: 98.8764% ( 3) 00:13:43.022 38947.109 - 39196.770: 98.9115% ( 4) 00:13:43.022 39196.770 - 39446.430: 98.9730% ( 7) 00:13:43.022 39446.430 - 39696.091: 99.0256% ( 6) 00:13:43.022 39696.091 - 39945.752: 99.0783% ( 6) 00:13:43.022 39945.752 - 40195.413: 99.1310% ( 6) 00:13:43.022 40195.413 - 40445.074: 99.1836% ( 6) 00:13:43.022 40445.074 - 40694.735: 99.2451% ( 7) 00:13:43.022 40694.735 - 40944.396: 99.2890% ( 5) 00:13:43.022 40944.396 - 41194.057: 99.3416% ( 6) 00:13:43.022 41194.057 - 41443.718: 99.3855% ( 5) 00:13:43.022 41443.718 - 41693.379: 99.4382% ( 6) 00:13:43.022 46187.276 - 46436.937: 99.4645% ( 3) 00:13:43.022 46436.937 - 46686.598: 99.5172% ( 6) 00:13:43.022 46686.598 - 46936.259: 99.5699% ( 6) 00:13:43.022 46936.259 - 47185.920: 99.6138% ( 5) 00:13:43.022 47185.920 - 47435.581: 99.6664% ( 6) 00:13:43.022 47435.581 - 47685.242: 99.7103% ( 5) 00:13:43.022 47685.242 - 47934.903: 99.7630% ( 6) 00:13:43.022 47934.903 - 48184.564: 99.8157% ( 6) 00:13:43.022 48184.564 - 48434.225: 99.8596% ( 5) 00:13:43.022 48434.225 - 48683.886: 99.9210% ( 7) 00:13:43.022 48683.886 - 48933.547: 99.9737% ( 6) 00:13:43.022 48933.547 - 49183.208: 100.0000% ( 3) 00:13:43.022 00:13:43.022 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:43.022 ============================================================================== 00:13:43.022 Range in us Cumulative IO count 00:13:43.022 8550.888 - 8613.303: 0.0351% ( 4) 00:13:43.022 8613.303 - 8675.718: 0.1053% ( 8) 00:13:43.022 8675.718 - 8738.133: 0.2546% ( 17) 00:13:43.022 8738.133 - 8800.549: 0.4828% ( 26) 00:13:43.022 8800.549 - 8862.964: 0.7198% ( 27) 00:13:43.022 8862.964 - 8925.379: 1.0534% ( 38) 00:13:43.022 8925.379 - 8987.794: 1.3694% ( 36) 00:13:43.022 8987.794 - 9050.210: 1.6678% ( 
34) 00:13:43.022 9050.210 - 9112.625: 2.0277% ( 41) 00:13:43.022 9112.625 - 9175.040: 2.4930% ( 53) 00:13:43.022 9175.040 - 9237.455: 2.9933% ( 57) 00:13:43.022 9237.455 - 9299.870: 3.5288% ( 61) 00:13:43.022 9299.870 - 9362.286: 4.0291% ( 57) 00:13:43.022 9362.286 - 9424.701: 4.5383% ( 58) 00:13:43.022 9424.701 - 9487.116: 5.0650% ( 60) 00:13:43.022 9487.116 - 9549.531: 5.6268% ( 64) 00:13:43.022 9549.531 - 9611.947: 6.1973% ( 65) 00:13:43.022 9611.947 - 9674.362: 6.7504% ( 63) 00:13:43.022 9674.362 - 9736.777: 7.3209% ( 65) 00:13:43.022 9736.777 - 9799.192: 7.9178% ( 68) 00:13:43.022 9799.192 - 9861.608: 8.6464% ( 83) 00:13:43.022 9861.608 - 9924.023: 9.6910% ( 119) 00:13:43.022 9924.023 - 9986.438: 11.0428% ( 154) 00:13:43.022 9986.438 - 10048.853: 12.7107% ( 190) 00:13:43.022 10048.853 - 10111.269: 14.7384% ( 231) 00:13:43.022 10111.269 - 10173.684: 17.1173% ( 271) 00:13:43.022 10173.684 - 10236.099: 19.8824% ( 315) 00:13:43.022 10236.099 - 10298.514: 22.7879% ( 331) 00:13:43.022 10298.514 - 10360.930: 26.0007% ( 366) 00:13:43.023 10360.930 - 10423.345: 29.3539% ( 382) 00:13:43.023 10423.345 - 10485.760: 32.7774% ( 390) 00:13:43.023 10485.760 - 10548.175: 36.2798% ( 399) 00:13:43.023 10548.175 - 10610.590: 39.8525% ( 407) 00:13:43.023 10610.590 - 10673.006: 43.3989% ( 404) 00:13:43.023 10673.006 - 10735.421: 47.0945% ( 421) 00:13:43.023 10735.421 - 10797.836: 50.8690% ( 430) 00:13:43.023 10797.836 - 10860.251: 54.6173% ( 427) 00:13:43.023 10860.251 - 10922.667: 58.3041% ( 420) 00:13:43.023 10922.667 - 10985.082: 62.1489% ( 438) 00:13:43.023 10985.082 - 11047.497: 65.8269% ( 419) 00:13:43.023 11047.497 - 11109.912: 69.1450% ( 378) 00:13:43.023 11109.912 - 11172.328: 72.1471% ( 342) 00:13:43.023 11172.328 - 11234.743: 74.8157% ( 304) 00:13:43.023 11234.743 - 11297.158: 77.1770% ( 269) 00:13:43.023 11297.158 - 11359.573: 79.2310% ( 234) 00:13:43.023 11359.573 - 11421.989: 80.9164% ( 192) 00:13:43.023 11421.989 - 11484.404: 82.3121% ( 159) 00:13:43.023 11484.404 - 11546.819: 83.3743% ( 121) 00:13:43.023 11546.819 - 11609.234: 84.4628% ( 124) 00:13:43.023 11609.234 - 11671.650: 85.2879% ( 94) 00:13:43.023 11671.650 - 11734.065: 86.1043% ( 93) 00:13:43.023 11734.065 - 11796.480: 86.8241% ( 82) 00:13:43.023 11796.480 - 11858.895: 87.5263% ( 80) 00:13:43.023 11858.895 - 11921.310: 88.0618% ( 61) 00:13:43.023 11921.310 - 11983.726: 88.6324% ( 65) 00:13:43.023 11983.726 - 12046.141: 89.1415% ( 58) 00:13:43.023 12046.141 - 12108.556: 89.6243% ( 55) 00:13:43.023 12108.556 - 12170.971: 90.0895% ( 53) 00:13:43.023 12170.971 - 12233.387: 90.4846% ( 45) 00:13:43.023 12233.387 - 12295.802: 90.8971% ( 47) 00:13:43.023 12295.802 - 12358.217: 91.3360% ( 50) 00:13:43.023 12358.217 - 12420.632: 91.7135% ( 43) 00:13:43.023 12420.632 - 12483.048: 92.1261% ( 47) 00:13:43.023 12483.048 - 12545.463: 92.5562% ( 49) 00:13:43.023 12545.463 - 12607.878: 92.9249% ( 42) 00:13:43.023 12607.878 - 12670.293: 93.2496% ( 37) 00:13:43.023 12670.293 - 12732.709: 93.5832% ( 38) 00:13:43.023 12732.709 - 12795.124: 93.9080% ( 37) 00:13:43.023 12795.124 - 12857.539: 94.2328% ( 37) 00:13:43.023 12857.539 - 12919.954: 94.5576% ( 37) 00:13:43.023 12919.954 - 12982.370: 94.9175% ( 41) 00:13:43.023 12982.370 - 13044.785: 95.2686% ( 40) 00:13:43.023 13044.785 - 13107.200: 95.5934% ( 37) 00:13:43.023 13107.200 - 13169.615: 95.9182% ( 37) 00:13:43.023 13169.615 - 13232.030: 96.1815% ( 30) 00:13:43.023 13232.030 - 13294.446: 96.4273% ( 28) 00:13:43.023 13294.446 - 13356.861: 96.6643% ( 27) 00:13:43.023 13356.861 - 13419.276: 96.8574% ( 
22) 00:13:43.023 13419.276 - 13481.691: 96.9979% ( 16) 00:13:43.023 13481.691 - 13544.107: 97.1559% ( 18) 00:13:43.023 13544.107 - 13606.522: 97.2876% ( 15) 00:13:43.023 13606.522 - 13668.937: 97.4280% ( 16) 00:13:43.023 13668.937 - 13731.352: 97.5158% ( 10) 00:13:43.023 13731.352 - 13793.768: 97.5860% ( 8) 00:13:43.023 13793.768 - 13856.183: 97.6387% ( 6) 00:13:43.023 13856.183 - 13918.598: 97.6914% ( 6) 00:13:43.023 13918.598 - 13981.013: 97.7353% ( 5) 00:13:43.023 13981.013 - 14043.429: 97.7528% ( 2) 00:13:43.023 14043.429 - 14105.844: 97.7704% ( 2) 00:13:43.023 14105.844 - 14168.259: 97.7967% ( 3) 00:13:43.023 14168.259 - 14230.674: 97.8143% ( 2) 00:13:43.023 14230.674 - 14293.090: 97.8318% ( 2) 00:13:43.023 14293.090 - 14355.505: 97.8494% ( 2) 00:13:43.023 14355.505 - 14417.920: 97.8757% ( 3) 00:13:43.023 14417.920 - 14480.335: 97.9020% ( 3) 00:13:43.023 14480.335 - 14542.750: 97.9284% ( 3) 00:13:43.023 14542.750 - 14605.166: 97.9635% ( 4) 00:13:43.023 14605.166 - 14667.581: 97.9898% ( 3) 00:13:43.023 14667.581 - 14729.996: 98.0249% ( 4) 00:13:43.023 14729.996 - 14792.411: 98.0513% ( 3) 00:13:43.023 14792.411 - 14854.827: 98.0776% ( 3) 00:13:43.023 14854.827 - 14917.242: 98.1039% ( 3) 00:13:43.023 14917.242 - 14979.657: 98.1390% ( 4) 00:13:43.023 14979.657 - 15042.072: 98.1742% ( 4) 00:13:43.023 15042.072 - 15104.488: 98.2268% ( 6) 00:13:43.023 15104.488 - 15166.903: 98.2707% ( 5) 00:13:43.023 15166.903 - 15229.318: 98.3146% ( 5) 00:13:43.023 15229.318 - 15291.733: 98.3585% ( 5) 00:13:43.023 15291.733 - 15354.149: 98.4112% ( 6) 00:13:43.023 15354.149 - 15416.564: 98.4287% ( 2) 00:13:43.023 15416.564 - 15478.979: 98.4375% ( 1) 00:13:43.023 15478.979 - 15541.394: 98.4638% ( 3) 00:13:43.023 15541.394 - 15603.810: 98.4814% ( 2) 00:13:43.023 15603.810 - 15666.225: 98.4989% ( 2) 00:13:43.023 15666.225 - 15728.640: 98.5165% ( 2) 00:13:43.023 15728.640 - 15791.055: 98.5341% ( 2) 00:13:43.023 15791.055 - 15853.470: 98.5516% ( 2) 00:13:43.023 15853.470 - 15915.886: 98.5692% ( 2) 00:13:43.023 15915.886 - 15978.301: 98.5867% ( 2) 00:13:43.023 15978.301 - 16103.131: 98.6306% ( 5) 00:13:43.023 16103.131 - 16227.962: 98.6657% ( 4) 00:13:43.023 16227.962 - 16352.792: 98.7008% ( 4) 00:13:43.023 16352.792 - 16477.623: 98.7360% ( 4) 00:13:43.023 16477.623 - 16602.453: 98.7711% ( 4) 00:13:43.023 16602.453 - 16727.284: 98.7974% ( 3) 00:13:43.023 16727.284 - 16852.114: 98.8325% ( 4) 00:13:43.023 16852.114 - 16976.945: 98.8676% ( 4) 00:13:43.023 16976.945 - 17101.775: 98.8764% ( 1) 00:13:43.023 35951.177 - 36200.838: 98.9115% ( 4) 00:13:43.023 36200.838 - 36450.499: 98.9554% ( 5) 00:13:43.023 36450.499 - 36700.160: 99.0081% ( 6) 00:13:43.023 36700.160 - 36949.821: 99.0432% ( 4) 00:13:43.023 36949.821 - 37199.482: 99.0871% ( 5) 00:13:43.023 37199.482 - 37449.143: 99.1397% ( 6) 00:13:43.023 37449.143 - 37698.804: 99.1924% ( 6) 00:13:43.023 37698.804 - 37948.465: 99.2363% ( 5) 00:13:43.023 37948.465 - 38198.126: 99.2890% ( 6) 00:13:43.023 38198.126 - 38447.787: 99.3504% ( 7) 00:13:43.023 38447.787 - 38697.448: 99.3943% ( 5) 00:13:43.023 38697.448 - 38947.109: 99.4382% ( 5) 00:13:43.023 43441.006 - 43690.667: 99.4821% ( 5) 00:13:43.023 43690.667 - 43940.328: 99.5260% ( 5) 00:13:43.023 43940.328 - 44189.989: 99.5874% ( 7) 00:13:43.023 44189.989 - 44439.650: 99.6313% ( 5) 00:13:43.023 44439.650 - 44689.310: 99.6928% ( 7) 00:13:43.023 44689.310 - 44938.971: 99.7542% ( 7) 00:13:43.023 44938.971 - 45188.632: 99.8069% ( 6) 00:13:43.023 45188.632 - 45438.293: 99.8596% ( 6) 00:13:43.023 45438.293 - 45687.954: 99.9122% 
( 6) 00:13:43.023 45687.954 - 45937.615: 99.9649% ( 6) 00:13:43.023 45937.615 - 46187.276: 100.0000% ( 4) 00:13:43.023 00:13:43.023 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:43.023 ============================================================================== 00:13:43.023 Range in us Cumulative IO count 00:13:43.023 8488.472 - 8550.888: 0.0176% ( 2) 00:13:43.023 8550.888 - 8613.303: 0.0439% ( 3) 00:13:43.023 8613.303 - 8675.718: 0.1317% ( 10) 00:13:43.023 8675.718 - 8738.133: 0.2897% ( 18) 00:13:43.023 8738.133 - 8800.549: 0.5267% ( 27) 00:13:43.023 8800.549 - 8862.964: 0.8076% ( 32) 00:13:43.023 8862.964 - 8925.379: 1.1148% ( 35) 00:13:43.023 8925.379 - 8987.794: 1.4221% ( 35) 00:13:43.023 8987.794 - 9050.210: 1.7644% ( 39) 00:13:43.023 9050.210 - 9112.625: 2.1594% ( 45) 00:13:43.023 9112.625 - 9175.040: 2.5808% ( 48) 00:13:43.023 9175.040 - 9237.455: 3.0460% ( 53) 00:13:43.023 9237.455 - 9299.870: 3.5112% ( 53) 00:13:43.023 9299.870 - 9362.286: 4.0204% ( 58) 00:13:43.023 9362.286 - 9424.701: 4.5734% ( 63) 00:13:43.023 9424.701 - 9487.116: 5.1264% ( 63) 00:13:43.023 9487.116 - 9549.531: 5.6531% ( 60) 00:13:43.023 9549.531 - 9611.947: 6.2237% ( 65) 00:13:43.023 9611.947 - 9674.362: 6.7416% ( 59) 00:13:43.023 9674.362 - 9736.777: 7.2156% ( 54) 00:13:43.023 9736.777 - 9799.192: 7.7862% ( 65) 00:13:43.023 9799.192 - 9861.608: 8.5060% ( 82) 00:13:43.023 9861.608 - 9924.023: 9.5418% ( 118) 00:13:43.023 9924.023 - 9986.438: 10.8409% ( 148) 00:13:43.023 9986.438 - 10048.853: 12.4824% ( 187) 00:13:43.023 10048.853 - 10111.269: 14.5716% ( 238) 00:13:43.023 10111.269 - 10173.684: 16.9505% ( 271) 00:13:43.023 10173.684 - 10236.099: 19.6893% ( 312) 00:13:43.023 10236.099 - 10298.514: 22.5509% ( 326) 00:13:43.023 10298.514 - 10360.930: 25.7374% ( 363) 00:13:43.023 10360.930 - 10423.345: 29.2574% ( 401) 00:13:43.023 10423.345 - 10485.760: 32.6545% ( 387) 00:13:43.023 10485.760 - 10548.175: 36.1921% ( 403) 00:13:43.023 10548.175 - 10610.590: 39.8174% ( 413) 00:13:43.023 10610.590 - 10673.006: 43.2672% ( 393) 00:13:43.023 10673.006 - 10735.421: 46.9452% ( 419) 00:13:43.023 10735.421 - 10797.836: 50.5706% ( 413) 00:13:43.023 10797.836 - 10860.251: 54.4768% ( 445) 00:13:43.023 10860.251 - 10922.667: 58.1548% ( 419) 00:13:43.023 10922.667 - 10985.082: 62.0523% ( 444) 00:13:43.023 10985.082 - 11047.497: 65.7479% ( 421) 00:13:43.023 11047.497 - 11109.912: 69.1099% ( 383) 00:13:43.023 11109.912 - 11172.328: 72.1383% ( 345) 00:13:43.023 11172.328 - 11234.743: 74.8069% ( 304) 00:13:43.023 11234.743 - 11297.158: 77.2121% ( 274) 00:13:43.023 11297.158 - 11359.573: 79.2223% ( 229) 00:13:43.023 11359.573 - 11421.989: 80.9779% ( 200) 00:13:43.023 11421.989 - 11484.404: 82.4263% ( 165) 00:13:43.023 11484.404 - 11546.819: 83.5323% ( 126) 00:13:43.023 11546.819 - 11609.234: 84.5242% ( 113) 00:13:43.023 11609.234 - 11671.650: 85.3494% ( 94) 00:13:43.023 11671.650 - 11734.065: 86.1833% ( 95) 00:13:43.023 11734.065 - 11796.480: 86.9558% ( 88) 00:13:43.023 11796.480 - 11858.895: 87.7195% ( 87) 00:13:43.023 11858.895 - 11921.310: 88.3339% ( 70) 00:13:43.023 11921.310 - 11983.726: 88.9133% ( 66) 00:13:43.023 11983.726 - 12046.141: 89.3961% ( 55) 00:13:43.023 12046.141 - 12108.556: 89.8438% ( 51) 00:13:43.023 12108.556 - 12170.971: 90.2124% ( 42) 00:13:43.023 12170.971 - 12233.387: 90.5811% ( 42) 00:13:43.023 12233.387 - 12295.802: 90.9322% ( 40) 00:13:43.023 12295.802 - 12358.217: 91.3185% ( 44) 00:13:43.023 12358.217 - 12420.632: 91.6959% ( 43) 00:13:43.024 12420.632 - 12483.048: 92.0295% ( 38) 
00:13:43.024 12483.048 - 12545.463: 92.4245% ( 45) 00:13:43.024 12545.463 - 12607.878: 92.7844% ( 41) 00:13:43.024 12607.878 - 12670.293: 93.0741% ( 33) 00:13:43.024 12670.293 - 12732.709: 93.3725% ( 34) 00:13:43.024 12732.709 - 12795.124: 93.6886% ( 36) 00:13:43.024 12795.124 - 12857.539: 94.0309% ( 39) 00:13:43.024 12857.539 - 12919.954: 94.3118% ( 32) 00:13:43.024 12919.954 - 12982.370: 94.6015% ( 33) 00:13:43.024 12982.370 - 13044.785: 94.8999% ( 34) 00:13:43.024 13044.785 - 13107.200: 95.2335% ( 38) 00:13:43.024 13107.200 - 13169.615: 95.5232% ( 33) 00:13:43.024 13169.615 - 13232.030: 95.8129% ( 33) 00:13:43.024 13232.030 - 13294.446: 96.0850% ( 31) 00:13:43.024 13294.446 - 13356.861: 96.3395% ( 29) 00:13:43.024 13356.861 - 13419.276: 96.5414% ( 23) 00:13:43.024 13419.276 - 13481.691: 96.7521% ( 24) 00:13:43.024 13481.691 - 13544.107: 96.9101% ( 18) 00:13:43.024 13544.107 - 13606.522: 97.0506% ( 16) 00:13:43.024 13606.522 - 13668.937: 97.1647% ( 13) 00:13:43.024 13668.937 - 13731.352: 97.2525% ( 10) 00:13:43.024 13731.352 - 13793.768: 97.3315% ( 9) 00:13:43.024 13793.768 - 13856.183: 97.4105% ( 9) 00:13:43.024 13856.183 - 13918.598: 97.4719% ( 7) 00:13:43.024 13918.598 - 13981.013: 97.5509% ( 9) 00:13:43.024 13981.013 - 14043.429: 97.6299% ( 9) 00:13:43.024 14043.429 - 14105.844: 97.7001% ( 8) 00:13:43.024 14105.844 - 14168.259: 97.7616% ( 7) 00:13:43.024 14168.259 - 14230.674: 97.8055% ( 5) 00:13:43.024 14230.674 - 14293.090: 97.8581% ( 6) 00:13:43.024 14293.090 - 14355.505: 97.9020% ( 5) 00:13:43.024 14355.505 - 14417.920: 97.9371% ( 4) 00:13:43.024 14417.920 - 14480.335: 97.9635% ( 3) 00:13:43.024 14480.335 - 14542.750: 97.9898% ( 3) 00:13:43.024 14542.750 - 14605.166: 98.0249% ( 4) 00:13:43.024 14605.166 - 14667.581: 98.0513% ( 3) 00:13:43.024 14667.581 - 14729.996: 98.0776% ( 3) 00:13:43.024 14729.996 - 14792.411: 98.1039% ( 3) 00:13:43.024 14792.411 - 14854.827: 98.1390% ( 4) 00:13:43.024 14854.827 - 14917.242: 98.1654% ( 3) 00:13:43.024 14917.242 - 14979.657: 98.1917% ( 3) 00:13:43.024 14979.657 - 15042.072: 98.2268% ( 4) 00:13:43.024 15042.072 - 15104.488: 98.2532% ( 3) 00:13:43.024 15104.488 - 15166.903: 98.2795% ( 3) 00:13:43.024 15166.903 - 15229.318: 98.3058% ( 3) 00:13:43.024 15229.318 - 15291.733: 98.3146% ( 1) 00:13:43.024 15603.810 - 15666.225: 98.3234% ( 1) 00:13:43.024 15666.225 - 15728.640: 98.3409% ( 2) 00:13:43.024 15728.640 - 15791.055: 98.3497% ( 1) 00:13:43.024 15791.055 - 15853.470: 98.3673% ( 2) 00:13:43.024 15853.470 - 15915.886: 98.3936% ( 3) 00:13:43.024 15915.886 - 15978.301: 98.4112% ( 2) 00:13:43.024 15978.301 - 16103.131: 98.4551% ( 5) 00:13:43.024 16103.131 - 16227.962: 98.4814% ( 3) 00:13:43.024 16227.962 - 16352.792: 98.5253% ( 5) 00:13:43.024 16352.792 - 16477.623: 98.5604% ( 4) 00:13:43.024 16477.623 - 16602.453: 98.5955% ( 4) 00:13:43.024 16602.453 - 16727.284: 98.6306% ( 4) 00:13:43.024 16727.284 - 16852.114: 98.6657% ( 4) 00:13:43.024 16852.114 - 16976.945: 98.7008% ( 4) 00:13:43.024 16976.945 - 17101.775: 98.7272% ( 3) 00:13:43.024 17101.775 - 17226.606: 98.7535% ( 3) 00:13:43.024 17226.606 - 17351.436: 98.7886% ( 4) 00:13:43.024 17351.436 - 17476.267: 98.8237% ( 4) 00:13:43.024 17476.267 - 17601.097: 98.8501% ( 3) 00:13:43.024 17601.097 - 17725.928: 98.8764% ( 3) 00:13:43.024 32455.924 - 32705.585: 98.9115% ( 4) 00:13:43.024 32705.585 - 32955.246: 98.9642% ( 6) 00:13:43.024 32955.246 - 33204.907: 99.0256% ( 7) 00:13:43.024 33204.907 - 33454.568: 99.0783% ( 6) 00:13:43.024 33454.568 - 33704.229: 99.1222% ( 5) 00:13:43.024 33704.229 - 
33953.890: 99.1749% ( 6) 00:13:43.024 33953.890 - 34203.550: 99.2275% ( 6) 00:13:43.024 34203.550 - 34453.211: 99.2714% ( 5) 00:13:43.024 34453.211 - 34702.872: 99.3241% ( 6) 00:13:43.024 34702.872 - 34952.533: 99.3855% ( 7) 00:13:43.024 34952.533 - 35202.194: 99.4294% ( 5) 00:13:43.024 35202.194 - 35451.855: 99.4382% ( 1) 00:13:43.024 39945.752 - 40195.413: 99.4909% ( 6) 00:13:43.024 40195.413 - 40445.074: 99.5435% ( 6) 00:13:43.024 40445.074 - 40694.735: 99.5874% ( 5) 00:13:43.024 40694.735 - 40944.396: 99.6401% ( 6) 00:13:43.024 40944.396 - 41194.057: 99.6840% ( 5) 00:13:43.024 41194.057 - 41443.718: 99.7367% ( 6) 00:13:43.024 41443.718 - 41693.379: 99.7981% ( 7) 00:13:43.024 41693.379 - 41943.040: 99.8508% ( 6) 00:13:43.024 41943.040 - 42192.701: 99.9034% ( 6) 00:13:43.024 42192.701 - 42442.362: 99.9473% ( 5) 00:13:43.024 42442.362 - 42692.023: 100.0000% ( 6) 00:13:43.024 00:13:43.024 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:43.024 ============================================================================== 00:13:43.024 Range in us Cumulative IO count 00:13:43.024 8613.303 - 8675.718: 0.0524% ( 6) 00:13:43.024 8675.718 - 8738.133: 0.1746% ( 14) 00:13:43.024 8738.133 - 8800.549: 0.3841% ( 24) 00:13:43.024 8800.549 - 8862.964: 0.6285% ( 28) 00:13:43.024 8862.964 - 8925.379: 0.9340% ( 35) 00:13:43.024 8925.379 - 8987.794: 1.2832% ( 40) 00:13:43.024 8987.794 - 9050.210: 1.6760% ( 45) 00:13:43.024 9050.210 - 9112.625: 2.0950% ( 48) 00:13:43.024 9112.625 - 9175.040: 2.5751% ( 55) 00:13:43.024 9175.040 - 9237.455: 3.0290% ( 52) 00:13:43.024 9237.455 - 9299.870: 3.5353% ( 58) 00:13:43.024 9299.870 - 9362.286: 4.0241% ( 56) 00:13:43.024 9362.286 - 9424.701: 4.6351% ( 70) 00:13:43.024 9424.701 - 9487.116: 5.1589% ( 60) 00:13:43.024 9487.116 - 9549.531: 5.7524% ( 68) 00:13:43.024 9549.531 - 9611.947: 6.3024% ( 63) 00:13:43.024 9611.947 - 9674.362: 6.8174% ( 59) 00:13:43.024 9674.362 - 9736.777: 7.3760% ( 64) 00:13:43.024 9736.777 - 9799.192: 7.9434% ( 65) 00:13:43.024 9799.192 - 9861.608: 8.6156% ( 77) 00:13:43.024 9861.608 - 9924.023: 9.5321% ( 105) 00:13:43.024 9924.023 - 9986.438: 10.9026% ( 157) 00:13:43.024 9986.438 - 10048.853: 12.6309% ( 198) 00:13:43.024 10048.853 - 10111.269: 14.6823% ( 235) 00:13:43.024 10111.269 - 10173.684: 17.0304% ( 269) 00:13:43.024 10173.684 - 10236.099: 19.6840% ( 304) 00:13:43.024 10236.099 - 10298.514: 22.6781% ( 343) 00:13:43.024 10298.514 - 10360.930: 25.7856% ( 356) 00:13:43.024 10360.930 - 10423.345: 29.0852% ( 378) 00:13:43.024 10423.345 - 10485.760: 32.5594% ( 398) 00:13:43.024 10485.760 - 10548.175: 35.9550% ( 389) 00:13:43.024 10548.175 - 10610.590: 39.5251% ( 409) 00:13:43.024 10610.590 - 10673.006: 42.9906% ( 397) 00:13:43.024 10673.006 - 10735.421: 46.6742% ( 422) 00:13:43.024 10735.421 - 10797.836: 50.3579% ( 422) 00:13:43.024 10797.836 - 10860.251: 54.1550% ( 435) 00:13:43.024 10860.251 - 10922.667: 57.8561% ( 424) 00:13:43.024 10922.667 - 10985.082: 61.7318% ( 444) 00:13:43.024 10985.082 - 11047.497: 65.4330% ( 424) 00:13:43.024 11047.497 - 11109.912: 68.8635% ( 393) 00:13:43.024 11109.912 - 11172.328: 71.8139% ( 338) 00:13:43.024 11172.328 - 11234.743: 74.5112% ( 309) 00:13:43.024 11234.743 - 11297.158: 76.8593% ( 269) 00:13:43.024 11297.158 - 11359.573: 79.0765% ( 254) 00:13:43.024 11359.573 - 11421.989: 80.8310% ( 201) 00:13:43.024 11421.989 - 11484.404: 82.2888% ( 167) 00:13:43.024 11484.404 - 11546.819: 83.4759% ( 136) 00:13:43.024 11546.819 - 11609.234: 84.5932% ( 128) 00:13:43.024 11609.234 - 11671.650: 85.5185% 
( 106) 00:13:43.024 11671.650 - 11734.065: 86.3914% ( 100) 00:13:43.024 11734.065 - 11796.480: 87.1945% ( 92) 00:13:43.024 11796.480 - 11858.895: 87.9626% ( 88) 00:13:43.024 11858.895 - 11921.310: 88.5737% ( 70) 00:13:43.024 11921.310 - 11983.726: 89.1934% ( 71) 00:13:43.024 11983.726 - 12046.141: 89.7608% ( 65) 00:13:43.024 12046.141 - 12108.556: 90.2409% ( 55) 00:13:43.024 12108.556 - 12170.971: 90.6861% ( 51) 00:13:43.024 12170.971 - 12233.387: 91.1051% ( 48) 00:13:43.024 12233.387 - 12295.802: 91.4368% ( 38) 00:13:43.024 12295.802 - 12358.217: 91.8209% ( 44) 00:13:43.024 12358.217 - 12420.632: 92.1439% ( 37) 00:13:43.024 12420.632 - 12483.048: 92.4843% ( 39) 00:13:43.024 12483.048 - 12545.463: 92.7723% ( 33) 00:13:43.024 12545.463 - 12607.878: 93.0779% ( 35) 00:13:43.024 12607.878 - 12670.293: 93.3572% ( 32) 00:13:43.025 12670.293 - 12732.709: 93.6191% ( 30) 00:13:43.025 12732.709 - 12795.124: 93.8809% ( 30) 00:13:43.025 12795.124 - 12857.539: 94.1079% ( 26) 00:13:43.025 12857.539 - 12919.954: 94.3261% ( 25) 00:13:43.025 12919.954 - 12982.370: 94.5531% ( 26) 00:13:43.025 12982.370 - 13044.785: 94.7800% ( 26) 00:13:43.025 13044.785 - 13107.200: 95.0070% ( 26) 00:13:43.025 13107.200 - 13169.615: 95.2776% ( 31) 00:13:43.025 13169.615 - 13232.030: 95.5307% ( 29) 00:13:43.025 13232.030 - 13294.446: 95.7664% ( 27) 00:13:43.025 13294.446 - 13356.861: 95.9759% ( 24) 00:13:43.025 13356.861 - 13419.276: 96.1679% ( 22) 00:13:43.025 13419.276 - 13481.691: 96.3163% ( 17) 00:13:43.025 13481.691 - 13544.107: 96.4909% ( 20) 00:13:43.025 13544.107 - 13606.522: 96.6306% ( 16) 00:13:43.025 13606.522 - 13668.937: 96.7703% ( 16) 00:13:43.025 13668.937 - 13731.352: 96.9099% ( 16) 00:13:43.025 13731.352 - 13793.768: 97.0409% ( 15) 00:13:43.025 13793.768 - 13856.183: 97.1543% ( 13) 00:13:43.025 13856.183 - 13918.598: 97.2416% ( 10) 00:13:43.025 13918.598 - 13981.013: 97.3551% ( 13) 00:13:43.025 13981.013 - 14043.429: 97.4424% ( 10) 00:13:43.025 14043.429 - 14105.844: 97.5471% ( 12) 00:13:43.025 14105.844 - 14168.259: 97.6432% ( 11) 00:13:43.025 14168.259 - 14230.674: 97.7392% ( 11) 00:13:43.025 14230.674 - 14293.090: 97.8090% ( 8) 00:13:43.025 14293.090 - 14355.505: 97.8788% ( 8) 00:13:43.025 14355.505 - 14417.920: 97.9487% ( 8) 00:13:43.025 14417.920 - 14480.335: 98.0185% ( 8) 00:13:43.025 14480.335 - 14542.750: 98.0796% ( 7) 00:13:43.025 14542.750 - 14605.166: 98.1058% ( 3) 00:13:43.025 14605.166 - 14667.581: 98.1582% ( 6) 00:13:43.025 14667.581 - 14729.996: 98.2018% ( 5) 00:13:43.025 14729.996 - 14792.411: 98.2455% ( 5) 00:13:43.025 14792.411 - 14854.827: 98.2891% ( 5) 00:13:43.025 14854.827 - 14917.242: 98.2978% ( 1) 00:13:43.025 14917.242 - 14979.657: 98.3153% ( 2) 00:13:43.025 14979.657 - 15042.072: 98.3240% ( 1) 00:13:43.025 16227.962 - 16352.792: 98.3328% ( 1) 00:13:43.025 16352.792 - 16477.623: 98.3677% ( 4) 00:13:43.025 16477.623 - 16602.453: 98.4026% ( 4) 00:13:43.025 16602.453 - 16727.284: 98.4462% ( 5) 00:13:43.025 16727.284 - 16852.114: 98.4724% ( 3) 00:13:43.025 16852.114 - 16976.945: 98.4986% ( 3) 00:13:43.025 16976.945 - 17101.775: 98.5335% ( 4) 00:13:43.025 17101.775 - 17226.606: 98.5684% ( 4) 00:13:43.025 17226.606 - 17351.436: 98.6034% ( 4) 00:13:43.025 17351.436 - 17476.267: 98.6295% ( 3) 00:13:43.025 17476.267 - 17601.097: 98.6557% ( 3) 00:13:43.025 17601.097 - 17725.928: 98.6906% ( 4) 00:13:43.025 17725.928 - 17850.758: 98.7256% ( 4) 00:13:43.025 17850.758 - 17975.589: 98.7605% ( 4) 00:13:43.025 17975.589 - 18100.419: 98.7867% ( 3) 00:13:43.025 18100.419 - 18225.250: 98.8216% ( 4) 
00:13:43.025 18225.250 - 18350.080: 98.8565% ( 4)
00:13:43.025 18350.080 - 18474.910: 98.8827% ( 3)
00:13:43.025 23967.451 - 24092.282: 98.9001% ( 2)
00:13:43.025 24092.282 - 24217.112: 98.9263% ( 3)
00:13:43.025 24217.112 - 24341.943: 98.9612% ( 4)
00:13:43.025 24341.943 - 24466.773: 98.9874% ( 3)
00:13:43.025 24466.773 - 24591.604: 99.0136% ( 3)
00:13:43.025 24591.604 - 24716.434: 99.0398% ( 3)
00:13:43.025 24716.434 - 24841.265: 99.0660% ( 3)
00:13:43.025 24841.265 - 24966.095: 99.0922% ( 3)
00:13:43.025 24966.095 - 25090.926: 99.1271% ( 4)
00:13:43.025 25090.926 - 25215.756: 99.1533% ( 3)
00:13:43.025 25215.756 - 25340.587: 99.1795% ( 3)
00:13:43.025 25340.587 - 25465.417: 99.2057% ( 3)
00:13:43.025 25465.417 - 25590.248: 99.2318% ( 3)
00:13:43.025 25590.248 - 25715.078: 99.2580% ( 3)
00:13:43.025 25715.078 - 25839.909: 99.2929% ( 4)
00:13:43.025 25839.909 - 25964.739: 99.3191% ( 3)
00:13:43.025 25964.739 - 26089.570: 99.3453% ( 3)
00:13:43.025 26089.570 - 26214.400: 99.3715% ( 3)
00:13:43.025 26214.400 - 26339.230: 99.3977% ( 3)
00:13:43.025 26339.230 - 26464.061: 99.4239% ( 3)
00:13:43.025 26464.061 - 26588.891: 99.4413% ( 2)
00:13:43.025 31207.619 - 31332.450: 99.4675% ( 3)
00:13:43.025 31332.450 - 31457.280: 99.4850% ( 2)
00:13:43.025 31457.280 - 31582.110: 99.5112% ( 3)
00:13:43.025 31582.110 - 31706.941: 99.5374% ( 3)
00:13:43.025 31706.941 - 31831.771: 99.5723% ( 4)
00:13:43.025 31831.771 - 31956.602: 99.5985% ( 3)
00:13:43.025 31956.602 - 32206.263: 99.6421% ( 5)
00:13:43.025 32206.263 - 32455.924: 99.6945% ( 6)
00:13:43.025 32455.924 - 32705.585: 99.7381% ( 5)
00:13:43.025 32705.585 - 32955.246: 99.7992% ( 7)
00:13:43.025 32955.246 - 33204.907: 99.8429% ( 5)
00:13:43.025 33204.907 - 33454.568: 99.8953% ( 6)
00:13:43.025 33454.568 - 33704.229: 99.9476% ( 6)
00:13:43.025 33704.229 - 33953.890: 100.0000% ( 6)
00:13:43.025
00:13:43.025 13:39:36 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:13:44.402 Initializing NVMe Controllers
00:13:44.402 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:13:44.402 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:13:44.402 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:13:44.402 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:13:44.402 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:13:44.402 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:13:44.402 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:13:44.402 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:13:44.402 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:13:44.402 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:13:44.402 Initialization complete. Launching workers.
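For readers skimming the log, the perf step above is a 1-second, queue-depth-128 sequential-write run at 12 KiB per I/O with software latency tracking switched on; that tracking is what produces the summary tables and per-bucket histograms that follow. Below is an annotated restatement of the exact command from the log. The per-flag glosses are my reading of the spdk_nvme_perf usage text, not something this log states, so check them against --help for the SPDK revision under test.

#!/usr/bin/env bash
# Annotated restatement of the perf invocation above (same flags, same values).
# Flag glosses are assumptions from spdk_nvme_perf usage, not from this log:
#   -q 128    queue depth: up to 128 outstanding I/Os per namespace
#   -w write  workload pattern: 100% sequential writes
#   -o 12288  I/O size in bytes (12 KiB per request)
#   -t 1      run time in seconds
#   -LL       -L enables software latency tracking; repeating it additionally
#             prints the per-bucket "Latency histogram" sections
#   -i 0      shared-memory group ID, so the app can coexist with other SPDK processes
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0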
00:13:44.402 ========================================================
00:13:44.402 Latency(us)
00:13:44.402 Device Information : IOPS MiB/s Average min max
00:13:44.402 PCIE (0000:00:13.0) NSID 1 from core 0: 10081.96 118.15 12735.17 9749.20 49391.17
00:13:44.402 PCIE (0000:00:10.0) NSID 1 from core 0: 10081.96 118.15 12699.53 9615.33 45946.18
00:13:44.402 PCIE (0000:00:11.0) NSID 1 from core 0: 10081.96 118.15 12666.99 9720.89 42416.08
00:13:44.402 PCIE (0000:00:12.0) NSID 1 from core 0: 10081.96 118.15 12636.00 9994.69 40040.20
00:13:44.402 PCIE (0000:00:12.0) NSID 2 from core 0: 10081.96 118.15 12603.56 9818.97 36905.31
00:13:44.402 PCIE (0000:00:12.0) NSID 3 from core 0: 10081.96 118.15 12571.07 9909.00 33464.24
00:13:44.402 ========================================================
00:13:44.402 Total : 60491.73 708.89 12652.05 9615.33 49391.17
00:13:44.402
00:13:44.402 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:13:44.402 =================================================================================
00:13:44.402 1.00000% : 10360.930us
00:13:44.402 10.00000% : 10922.667us
00:13:44.402 25.00000% : 11359.573us
00:13:44.402 50.00000% : 11921.310us
00:13:44.402 75.00000% : 12857.539us
00:13:44.402 90.00000% : 15229.318us
00:13:44.402 95.00000% : 15915.886us
00:13:44.402 98.00000% : 17601.097us
00:13:44.402 99.00000% : 37199.482us
00:13:44.402 99.50000% : 47185.920us
00:13:44.402 99.90000% : 48933.547us
00:13:44.402 99.99000% : 49432.869us
00:13:44.402 99.99900% : 49432.869us
00:13:44.402 99.99990% : 49432.869us
00:13:44.402 99.99999% : 49432.869us
00:13:44.402
00:13:44.402 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:13:44.402 =================================================================================
00:13:44.402 1.00000% : 10111.269us
00:13:44.402 10.00000% : 10797.836us
00:13:44.402 25.00000% : 11297.158us
00:13:44.402 50.00000% : 11921.310us
00:13:44.402 75.00000% : 12857.539us
00:13:44.402 90.00000% : 15291.733us
00:13:44.402 95.00000% : 16103.131us
00:13:44.402 98.00000% : 17601.097us
00:13:44.402 99.00000% : 35951.177us
00:13:44.402 99.50000% : 43690.667us
00:13:44.402 99.90000% : 45687.954us
00:13:44.402 99.99000% : 45937.615us
00:13:44.402 99.99900% : 46187.276us
00:13:44.402 99.99990% : 46187.276us
00:13:44.402 99.99999% : 46187.276us
00:13:44.402
00:13:44.402 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:13:44.402 =================================================================================
00:13:44.402 1.00000% : 10298.514us
00:13:44.402 10.00000% : 10860.251us
00:13:44.402 25.00000% : 11359.573us
00:13:44.402 50.00000% : 11921.310us
00:13:44.402 75.00000% : 12857.539us
00:13:44.402 90.00000% : 15291.733us
00:13:44.402 95.00000% : 16103.131us
00:13:44.402 98.00000% : 17725.928us
00:13:44.402 99.00000% : 32955.246us
00:13:44.402 99.50000% : 40445.074us
00:13:44.402 99.90000% : 42192.701us
00:13:44.402 99.99000% : 42442.362us
00:13:44.402 99.99900% : 42442.362us
00:13:44.402 99.99990% : 42442.362us
00:13:44.402 99.99999% : 42442.362us
00:13:44.402
00:13:44.402 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:13:44.402 =================================================================================
00:13:44.402 1.00000% : 10298.514us
00:13:44.402 10.00000% : 10922.667us
00:13:44.402 25.00000% : 11297.158us
00:13:44.402 50.00000% : 11921.310us
00:13:44.402 75.00000% : 12857.539us
00:13:44.402 90.00000% : 15229.318us
00:13:44.402 95.00000% : 16103.131us
00:13:44.402 98.00000% :
17601.097us 00:13:44.402 99.00000% : 30583.467us 00:13:44.402 99.50000% : 38198.126us 00:13:44.402 99.90000% : 39696.091us 00:13:44.402 99.99000% : 40195.413us 00:13:44.402 99.99900% : 40195.413us 00:13:44.402 99.99990% : 40195.413us 00:13:44.402 99.99999% : 40195.413us 00:13:44.402 00:13:44.402 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:44.402 ================================================================================= 00:13:44.402 1.00000% : 10298.514us 00:13:44.402 10.00000% : 10860.251us 00:13:44.402 25.00000% : 11297.158us 00:13:44.402 50.00000% : 11921.310us 00:13:44.402 75.00000% : 12857.539us 00:13:44.402 90.00000% : 15229.318us 00:13:44.402 95.00000% : 16103.131us 00:13:44.402 98.00000% : 17975.589us 00:13:44.402 99.00000% : 27337.874us 00:13:44.402 99.50000% : 34952.533us 00:13:44.402 99.90000% : 36700.160us 00:13:44.402 99.99000% : 36949.821us 00:13:44.402 99.99900% : 36949.821us 00:13:44.402 99.99990% : 36949.821us 00:13:44.402 99.99999% : 36949.821us 00:13:44.402 00:13:44.402 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:44.402 ================================================================================= 00:13:44.402 1.00000% : 10298.514us 00:13:44.402 10.00000% : 10860.251us 00:13:44.402 25.00000% : 11297.158us 00:13:44.402 50.00000% : 11921.310us 00:13:44.402 75.00000% : 12857.539us 00:13:44.402 90.00000% : 15229.318us 00:13:44.402 95.00000% : 16103.131us 00:13:44.402 98.00000% : 18225.250us 00:13:44.402 99.00000% : 24092.282us 00:13:44.402 99.50000% : 31457.280us 00:13:44.402 99.90000% : 33204.907us 00:13:44.402 99.99000% : 33454.568us 00:13:44.402 99.99900% : 33704.229us 00:13:44.402 99.99990% : 33704.229us 00:13:44.402 99.99999% : 33704.229us 00:13:44.402 00:13:44.402 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:44.402 ============================================================================== 00:13:44.402 Range in us Cumulative IO count 00:13:44.402 9736.777 - 9799.192: 0.0396% ( 4) 00:13:44.402 9799.192 - 9861.608: 0.0791% ( 4) 00:13:44.402 9861.608 - 9924.023: 0.1187% ( 4) 00:13:44.402 9924.023 - 9986.438: 0.1681% ( 5) 00:13:44.402 9986.438 - 10048.853: 0.2176% ( 5) 00:13:44.402 10048.853 - 10111.269: 0.3165% ( 10) 00:13:44.402 10111.269 - 10173.684: 0.5835% ( 27) 00:13:44.402 10173.684 - 10236.099: 0.7318% ( 15) 00:13:44.402 10236.099 - 10298.514: 0.9790% ( 25) 00:13:44.402 10298.514 - 10360.930: 1.4339% ( 46) 00:13:44.402 10360.930 - 10423.345: 2.3932% ( 97) 00:13:44.402 10423.345 - 10485.760: 3.2634% ( 88) 00:13:44.402 10485.760 - 10548.175: 4.1535% ( 90) 00:13:44.402 10548.175 - 10610.590: 5.2710% ( 113) 00:13:44.402 10610.590 - 10673.006: 6.5368% ( 128) 00:13:44.402 10673.006 - 10735.421: 7.4763% ( 95) 00:13:44.402 10735.421 - 10797.836: 8.5146% ( 105) 00:13:44.402 10797.836 - 10860.251: 9.4640% ( 96) 00:13:44.402 10860.251 - 10922.667: 10.8287% ( 138) 00:13:44.402 10922.667 - 10985.082: 12.4802% ( 167) 00:13:44.402 10985.082 - 11047.497: 14.6361% ( 218) 00:13:44.402 11047.497 - 11109.912: 17.0589% ( 245) 00:13:44.402 11109.912 - 11172.328: 19.7983% ( 277) 00:13:44.402 11172.328 - 11234.743: 22.5277% ( 276) 00:13:44.402 11234.743 - 11297.158: 24.7725% ( 227) 00:13:44.402 11297.158 - 11359.573: 27.3536% ( 261) 00:13:44.402 11359.573 - 11421.989: 30.3995% ( 308) 00:13:44.402 11421.989 - 11484.404: 32.9707% ( 260) 00:13:44.402 11484.404 - 11546.819: 35.9771% ( 304) 00:13:44.402 11546.819 - 11609.234: 38.5285% ( 258) 00:13:44.402 11609.234 - 11671.650: 41.3667% ( 287) 00:13:44.402 
11671.650 - 11734.065: 44.0961% ( 276) 00:13:44.402 11734.065 - 11796.480: 46.6475% ( 258) 00:13:44.402 11796.480 - 11858.895: 49.1792% ( 256) 00:13:44.403 11858.895 - 11921.310: 51.4339% ( 228) 00:13:44.403 11921.310 - 11983.726: 53.5799% ( 217) 00:13:44.403 11983.726 - 12046.141: 55.6764% ( 212) 00:13:44.403 12046.141 - 12108.556: 57.7828% ( 213) 00:13:44.403 12108.556 - 12170.971: 59.9090% ( 215) 00:13:44.403 12170.971 - 12233.387: 61.8671% ( 198) 00:13:44.403 12233.387 - 12295.802: 63.7856% ( 194) 00:13:44.403 12295.802 - 12358.217: 65.7041% ( 194) 00:13:44.403 12358.217 - 12420.632: 67.4842% ( 180) 00:13:44.403 12420.632 - 12483.048: 69.0071% ( 154) 00:13:44.403 12483.048 - 12545.463: 70.3323% ( 134) 00:13:44.403 12545.463 - 12607.878: 71.6475% ( 133) 00:13:44.403 12607.878 - 12670.293: 72.7551% ( 112) 00:13:44.403 12670.293 - 12732.709: 73.8528% ( 111) 00:13:44.403 12732.709 - 12795.124: 74.7330% ( 89) 00:13:44.403 12795.124 - 12857.539: 75.6527% ( 93) 00:13:44.403 12857.539 - 12919.954: 76.4636% ( 82) 00:13:44.403 12919.954 - 12982.370: 77.1460% ( 69) 00:13:44.403 12982.370 - 13044.785: 77.8481% ( 71) 00:13:44.403 13044.785 - 13107.200: 78.4612% ( 62) 00:13:44.403 13107.200 - 13169.615: 78.9359% ( 48) 00:13:44.403 13169.615 - 13232.030: 79.4007% ( 47) 00:13:44.403 13232.030 - 13294.446: 79.8358% ( 44) 00:13:44.403 13294.446 - 13356.861: 80.1919% ( 36) 00:13:44.403 13356.861 - 13419.276: 80.3995% ( 21) 00:13:44.403 13419.276 - 13481.691: 80.5676% ( 17) 00:13:44.403 13481.691 - 13544.107: 80.8248% ( 26) 00:13:44.403 13544.107 - 13606.522: 81.1709% ( 35) 00:13:44.403 13606.522 - 13668.937: 81.3983% ( 23) 00:13:44.403 13668.937 - 13731.352: 81.5467% ( 15) 00:13:44.403 13731.352 - 13793.768: 81.7148% ( 17) 00:13:44.403 13793.768 - 13856.183: 81.8631% ( 15) 00:13:44.403 13856.183 - 13918.598: 82.0312% ( 17) 00:13:44.403 13918.598 - 13981.013: 82.1895% ( 16) 00:13:44.403 13981.013 - 14043.429: 82.4268% ( 24) 00:13:44.403 14043.429 - 14105.844: 82.8026% ( 38) 00:13:44.403 14105.844 - 14168.259: 83.0202% ( 22) 00:13:44.403 14168.259 - 14230.674: 83.1388% ( 12) 00:13:44.403 14230.674 - 14293.090: 83.3267% ( 19) 00:13:44.403 14293.090 - 14355.505: 83.6630% ( 34) 00:13:44.403 14355.505 - 14417.920: 84.0190% ( 36) 00:13:44.403 14417.920 - 14480.335: 84.4244% ( 41) 00:13:44.403 14480.335 - 14542.750: 84.9782% ( 56) 00:13:44.403 14542.750 - 14605.166: 85.4628% ( 49) 00:13:44.403 14605.166 - 14667.581: 85.9573% ( 50) 00:13:44.403 14667.581 - 14729.996: 86.4122% ( 46) 00:13:44.403 14729.996 - 14792.411: 86.8275% ( 42) 00:13:44.403 14792.411 - 14854.827: 87.2528% ( 43) 00:13:44.403 14854.827 - 14917.242: 87.7275% ( 48) 00:13:44.403 14917.242 - 14979.657: 88.3900% ( 67) 00:13:44.403 14979.657 - 15042.072: 88.9241% ( 54) 00:13:44.403 15042.072 - 15104.488: 89.4086% ( 49) 00:13:44.403 15104.488 - 15166.903: 89.9624% ( 56) 00:13:44.403 15166.903 - 15229.318: 90.5162% ( 56) 00:13:44.403 15229.318 - 15291.733: 91.0700% ( 56) 00:13:44.403 15291.733 - 15354.149: 91.6634% ( 60) 00:13:44.403 15354.149 - 15416.564: 92.1479% ( 49) 00:13:44.403 15416.564 - 15478.979: 92.6424% ( 50) 00:13:44.403 15478.979 - 15541.394: 93.1566% ( 52) 00:13:44.403 15541.394 - 15603.810: 93.6116% ( 46) 00:13:44.403 15603.810 - 15666.225: 93.9972% ( 39) 00:13:44.403 15666.225 - 15728.640: 94.3532% ( 36) 00:13:44.403 15728.640 - 15791.055: 94.5807% ( 23) 00:13:44.403 15791.055 - 15853.470: 94.7983% ( 22) 00:13:44.403 15853.470 - 15915.886: 95.0059% ( 21) 00:13:44.403 15915.886 - 15978.301: 95.1839% ( 18) 00:13:44.403 15978.301 - 
16103.131: 95.6290% ( 45) 00:13:44.403 16103.131 - 16227.962: 95.9256% ( 30) 00:13:44.403 16227.962 - 16352.792: 96.1926% ( 27) 00:13:44.403 16352.792 - 16477.623: 96.4201% ( 23) 00:13:44.403 16477.623 - 16602.453: 96.5981% ( 18) 00:13:44.403 16602.453 - 16727.284: 96.8157% ( 22) 00:13:44.403 16727.284 - 16852.114: 97.0134% ( 20) 00:13:44.403 16852.114 - 16976.945: 97.2310% ( 22) 00:13:44.403 16976.945 - 17101.775: 97.4486% ( 22) 00:13:44.403 17101.775 - 17226.606: 97.6266% ( 18) 00:13:44.403 17226.606 - 17351.436: 97.7848% ( 16) 00:13:44.403 17351.436 - 17476.267: 97.9134% ( 13) 00:13:44.403 17476.267 - 17601.097: 98.0123% ( 10) 00:13:44.403 17601.097 - 17725.928: 98.0815% ( 7) 00:13:44.403 17725.928 - 17850.758: 98.1013% ( 2) 00:13:44.403 18474.910 - 18599.741: 98.1804% ( 8) 00:13:44.403 18599.741 - 18724.571: 98.2496% ( 7) 00:13:44.403 18724.571 - 18849.402: 98.3089% ( 6) 00:13:44.403 18849.402 - 18974.232: 98.3584% ( 5) 00:13:44.403 18974.232 - 19099.063: 98.4375% ( 8) 00:13:44.403 19099.063 - 19223.893: 98.4968% ( 6) 00:13:44.403 19223.893 - 19348.724: 98.5661% ( 7) 00:13:44.403 19348.724 - 19473.554: 98.6254% ( 6) 00:13:44.403 19473.554 - 19598.385: 98.6847% ( 6) 00:13:44.403 19598.385 - 19723.215: 98.7342% ( 5) 00:13:44.403 35701.516 - 35951.177: 98.7638% ( 3) 00:13:44.403 35951.177 - 36200.838: 98.8133% ( 5) 00:13:44.403 36200.838 - 36450.499: 98.8627% ( 5) 00:13:44.403 36450.499 - 36700.160: 98.9122% ( 5) 00:13:44.403 36700.160 - 36949.821: 98.9715% ( 6) 00:13:44.403 36949.821 - 37199.482: 99.0210% ( 5) 00:13:44.403 37199.482 - 37449.143: 99.0704% ( 5) 00:13:44.403 37449.143 - 37698.804: 99.1297% ( 6) 00:13:44.403 37698.804 - 37948.465: 99.1792% ( 5) 00:13:44.403 37948.465 - 38198.126: 99.2286% ( 5) 00:13:44.403 38198.126 - 38447.787: 99.2781% ( 5) 00:13:44.403 38447.787 - 38697.448: 99.3275% ( 5) 00:13:44.403 38697.448 - 38947.109: 99.3671% ( 4) 00:13:44.403 46436.937 - 46686.598: 99.4066% ( 4) 00:13:44.403 46686.598 - 46936.259: 99.4660% ( 6) 00:13:44.403 46936.259 - 47185.920: 99.5154% ( 5) 00:13:44.403 47185.920 - 47435.581: 99.5649% ( 5) 00:13:44.403 47435.581 - 47685.242: 99.6242% ( 6) 00:13:44.403 47685.242 - 47934.903: 99.6737% ( 5) 00:13:44.403 47934.903 - 48184.564: 99.7330% ( 6) 00:13:44.403 48184.564 - 48434.225: 99.7824% ( 5) 00:13:44.403 48434.225 - 48683.886: 99.8418% ( 6) 00:13:44.403 48683.886 - 48933.547: 99.9011% ( 6) 00:13:44.403 48933.547 - 49183.208: 99.9506% ( 5) 00:13:44.403 49183.208 - 49432.869: 100.0000% ( 5) 00:13:44.403 00:13:44.403 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:44.403 ============================================================================== 00:13:44.403 Range in us Cumulative IO count 00:13:44.403 9611.947 - 9674.362: 0.2077% ( 21) 00:13:44.403 9674.362 - 9736.777: 0.2373% ( 3) 00:13:44.403 9736.777 - 9799.192: 0.2472% ( 1) 00:13:44.403 9861.608 - 9924.023: 0.2868% ( 4) 00:13:44.403 9924.023 - 9986.438: 0.3758% ( 9) 00:13:44.403 9986.438 - 10048.853: 0.8109% ( 44) 00:13:44.403 10048.853 - 10111.269: 1.1966% ( 39) 00:13:44.403 10111.269 - 10173.684: 1.7306% ( 54) 00:13:44.403 10173.684 - 10236.099: 2.4723% ( 75) 00:13:44.403 10236.099 - 10298.514: 3.0854% ( 62) 00:13:44.403 10298.514 - 10360.930: 3.6788% ( 60) 00:13:44.403 10360.930 - 10423.345: 4.2919% ( 62) 00:13:44.403 10423.345 - 10485.760: 5.0040% ( 72) 00:13:44.403 10485.760 - 10548.175: 5.6270% ( 63) 00:13:44.403 10548.175 - 10610.590: 6.3588% ( 74) 00:13:44.403 10610.590 - 10673.006: 7.2290% ( 88) 00:13:44.403 10673.006 - 10735.421: 8.3267% ( 111) 
00:13:44.403 10735.421 - 10797.836: 10.2453% ( 194) 00:13:44.403 10797.836 - 10860.251: 11.4517% ( 122) 00:13:44.403 10860.251 - 10922.667: 13.2417% ( 181) 00:13:44.403 10922.667 - 10985.082: 15.1206% ( 190) 00:13:44.403 10985.082 - 11047.497: 17.1974% ( 210) 00:13:44.403 11047.497 - 11109.912: 19.0170% ( 184) 00:13:44.403 11109.912 - 11172.328: 21.2520% ( 226) 00:13:44.403 11172.328 - 11234.743: 23.1804% ( 195) 00:13:44.403 11234.743 - 11297.158: 25.7714% ( 262) 00:13:44.403 11297.158 - 11359.573: 28.3525% ( 261) 00:13:44.403 11359.573 - 11421.989: 30.8940% ( 257) 00:13:44.403 11421.989 - 11484.404: 33.4157% ( 255) 00:13:44.403 11484.404 - 11546.819: 35.6606% ( 227) 00:13:44.403 11546.819 - 11609.234: 37.7571% ( 212) 00:13:44.403 11609.234 - 11671.650: 39.9426% ( 221) 00:13:44.403 11671.650 - 11734.065: 42.4347% ( 252) 00:13:44.403 11734.065 - 11796.480: 45.0455% ( 264) 00:13:44.403 11796.480 - 11858.895: 47.5079% ( 249) 00:13:44.403 11858.895 - 11921.310: 50.0396% ( 256) 00:13:44.403 11921.310 - 11983.726: 52.4723% ( 246) 00:13:44.403 11983.726 - 12046.141: 54.8457% ( 240) 00:13:44.404 12046.141 - 12108.556: 57.0510% ( 223) 00:13:44.404 12108.556 - 12170.971: 59.3157% ( 229) 00:13:44.404 12170.971 - 12233.387: 61.2243% ( 193) 00:13:44.404 12233.387 - 12295.802: 63.0241% ( 182) 00:13:44.404 12295.802 - 12358.217: 64.7350% ( 173) 00:13:44.404 12358.217 - 12420.632: 66.4260% ( 171) 00:13:44.404 12420.632 - 12483.048: 67.9786% ( 157) 00:13:44.404 12483.048 - 12545.463: 69.3532% ( 139) 00:13:44.404 12545.463 - 12607.878: 70.6388% ( 130) 00:13:44.404 12607.878 - 12670.293: 71.7860% ( 116) 00:13:44.404 12670.293 - 12732.709: 72.7848% ( 101) 00:13:44.404 12732.709 - 12795.124: 73.9616% ( 119) 00:13:44.404 12795.124 - 12857.539: 75.0000% ( 105) 00:13:44.404 12857.539 - 12919.954: 75.8505% ( 86) 00:13:44.404 12919.954 - 12982.370: 76.6515% ( 81) 00:13:44.404 12982.370 - 13044.785: 77.4525% ( 81) 00:13:44.404 13044.785 - 13107.200: 78.2239% ( 78) 00:13:44.404 13107.200 - 13169.615: 78.8568% ( 64) 00:13:44.404 13169.615 - 13232.030: 79.4798% ( 63) 00:13:44.404 13232.030 - 13294.446: 79.9743% ( 50) 00:13:44.404 13294.446 - 13356.861: 80.4094% ( 44) 00:13:44.404 13356.861 - 13419.276: 80.7654% ( 36) 00:13:44.404 13419.276 - 13481.691: 81.0324% ( 27) 00:13:44.404 13481.691 - 13544.107: 81.2896% ( 26) 00:13:44.404 13544.107 - 13606.522: 81.4775% ( 19) 00:13:44.404 13606.522 - 13668.937: 81.6159% ( 14) 00:13:44.404 13668.937 - 13731.352: 81.7939% ( 18) 00:13:44.404 13731.352 - 13793.768: 81.9620% ( 17) 00:13:44.404 13793.768 - 13856.183: 82.1203% ( 16) 00:13:44.404 13856.183 - 13918.598: 82.3873% ( 27) 00:13:44.404 13918.598 - 13981.013: 82.5752% ( 19) 00:13:44.404 13981.013 - 14043.429: 82.9213% ( 35) 00:13:44.404 14043.429 - 14105.844: 83.1191% ( 20) 00:13:44.404 14105.844 - 14168.259: 83.3762% ( 26) 00:13:44.404 14168.259 - 14230.674: 83.5740% ( 20) 00:13:44.404 14230.674 - 14293.090: 83.6926% ( 12) 00:13:44.404 14293.090 - 14355.505: 83.9399% ( 25) 00:13:44.404 14355.505 - 14417.920: 84.3750% ( 44) 00:13:44.404 14417.920 - 14480.335: 84.6915% ( 32) 00:13:44.404 14480.335 - 14542.750: 85.0079% ( 32) 00:13:44.404 14542.750 - 14605.166: 85.4233% ( 42) 00:13:44.404 14605.166 - 14667.581: 85.7199% ( 30) 00:13:44.404 14667.581 - 14729.996: 86.1452% ( 43) 00:13:44.404 14729.996 - 14792.411: 86.4814% ( 34) 00:13:44.404 14792.411 - 14854.827: 86.9561% ( 48) 00:13:44.404 14854.827 - 14917.242: 87.4407% ( 49) 00:13:44.404 14917.242 - 14979.657: 87.9450% ( 51) 00:13:44.404 14979.657 - 15042.072: 88.4494% ( 51) 
00:13:44.404 15042.072 - 15104.488: 88.9241% ( 48) 00:13:44.404 15104.488 - 15166.903: 89.3888% ( 47) 00:13:44.404 15166.903 - 15229.318: 89.8734% ( 49) 00:13:44.404 15229.318 - 15291.733: 90.3481% ( 48) 00:13:44.404 15291.733 - 15354.149: 90.8030% ( 46) 00:13:44.404 15354.149 - 15416.564: 91.2777% ( 48) 00:13:44.404 15416.564 - 15478.979: 91.7326% ( 46) 00:13:44.404 15478.979 - 15541.394: 92.1479% ( 42) 00:13:44.404 15541.394 - 15603.810: 92.6028% ( 46) 00:13:44.404 15603.810 - 15666.225: 93.0874% ( 49) 00:13:44.404 15666.225 - 15728.640: 93.5324% ( 45) 00:13:44.404 15728.640 - 15791.055: 93.9280% ( 40) 00:13:44.404 15791.055 - 15853.470: 94.3137% ( 39) 00:13:44.404 15853.470 - 15915.886: 94.5312% ( 22) 00:13:44.404 15915.886 - 15978.301: 94.7884% ( 26) 00:13:44.404 15978.301 - 16103.131: 95.3817% ( 60) 00:13:44.404 16103.131 - 16227.962: 95.9256% ( 55) 00:13:44.404 16227.962 - 16352.792: 96.3311% ( 41) 00:13:44.404 16352.792 - 16477.623: 96.5585% ( 23) 00:13:44.404 16477.623 - 16602.453: 96.7860% ( 23) 00:13:44.404 16602.453 - 16727.284: 97.0233% ( 24) 00:13:44.404 16727.284 - 16852.114: 97.2211% ( 20) 00:13:44.404 16852.114 - 16976.945: 97.4387% ( 22) 00:13:44.404 16976.945 - 17101.775: 97.5969% ( 16) 00:13:44.404 17101.775 - 17226.606: 97.7255% ( 13) 00:13:44.404 17226.606 - 17351.436: 97.8540% ( 13) 00:13:44.404 17351.436 - 17476.267: 97.9925% ( 14) 00:13:44.404 17476.267 - 17601.097: 98.0617% ( 7) 00:13:44.404 17601.097 - 17725.928: 98.1013% ( 4) 00:13:44.404 18350.080 - 18474.910: 98.2002% ( 10) 00:13:44.404 18474.910 - 18599.741: 98.2694% ( 7) 00:13:44.404 18599.741 - 18724.571: 98.3188% ( 5) 00:13:44.404 18724.571 - 18849.402: 98.3584% ( 4) 00:13:44.404 18849.402 - 18974.232: 98.4078% ( 5) 00:13:44.404 18974.232 - 19099.063: 98.4474% ( 4) 00:13:44.404 19099.063 - 19223.893: 98.4869% ( 4) 00:13:44.404 19223.893 - 19348.724: 98.5364% ( 5) 00:13:44.404 19348.724 - 19473.554: 98.5858% ( 5) 00:13:44.404 19473.554 - 19598.385: 98.6254% ( 4) 00:13:44.404 19598.385 - 19723.215: 98.6748% ( 5) 00:13:44.404 19723.215 - 19848.046: 98.7342% ( 6) 00:13:44.404 34453.211 - 34702.872: 98.7737% ( 4) 00:13:44.404 34702.872 - 34952.533: 98.8331% ( 6) 00:13:44.404 34952.533 - 35202.194: 98.8924% ( 6) 00:13:44.404 35202.194 - 35451.855: 98.9320% ( 4) 00:13:44.404 35451.855 - 35701.516: 98.9814% ( 5) 00:13:44.404 35701.516 - 35951.177: 99.0407% ( 6) 00:13:44.404 35951.177 - 36200.838: 99.0803% ( 4) 00:13:44.404 36200.838 - 36450.499: 99.1396% ( 6) 00:13:44.404 36450.499 - 36700.160: 99.1891% ( 5) 00:13:44.404 36700.160 - 36949.821: 99.2385% ( 5) 00:13:44.404 36949.821 - 37199.482: 99.2781% ( 4) 00:13:44.404 37199.482 - 37449.143: 99.3374% ( 6) 00:13:44.404 37449.143 - 37698.804: 99.3671% ( 3) 00:13:44.404 42941.684 - 43191.345: 99.3968% ( 3) 00:13:44.404 43191.345 - 43441.006: 99.4660% ( 7) 00:13:44.404 43441.006 - 43690.667: 99.5154% ( 5) 00:13:44.404 43690.667 - 43940.328: 99.5649% ( 5) 00:13:44.404 43940.328 - 44189.989: 99.6242% ( 6) 00:13:44.404 44189.989 - 44439.650: 99.6737% ( 5) 00:13:44.404 44439.650 - 44689.310: 99.7231% ( 5) 00:13:44.404 44689.310 - 44938.971: 99.7824% ( 6) 00:13:44.404 44938.971 - 45188.632: 99.8319% ( 5) 00:13:44.404 45188.632 - 45438.293: 99.8813% ( 5) 00:13:44.404 45438.293 - 45687.954: 99.9407% ( 6) 00:13:44.404 45687.954 - 45937.615: 99.9901% ( 5) 00:13:44.404 45937.615 - 46187.276: 100.0000% ( 1) 00:13:44.404 00:13:44.404 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:44.404 
============================================================================== 00:13:44.404 Range in us Cumulative IO count 00:13:44.404 9674.362 - 9736.777: 0.0198% ( 2) 00:13:44.404 9736.777 - 9799.192: 0.0791% ( 6) 00:13:44.404 9799.192 - 9861.608: 0.1384% ( 6) 00:13:44.404 9861.608 - 9924.023: 0.1879% ( 5) 00:13:44.404 9924.023 - 9986.438: 0.2571% ( 7) 00:13:44.404 9986.438 - 10048.853: 0.2967% ( 4) 00:13:44.404 10048.853 - 10111.269: 0.4747% ( 18) 00:13:44.404 10111.269 - 10173.684: 0.5934% ( 12) 00:13:44.404 10173.684 - 10236.099: 0.7516% ( 16) 00:13:44.404 10236.099 - 10298.514: 1.0977% ( 35) 00:13:44.404 10298.514 - 10360.930: 1.5724% ( 48) 00:13:44.404 10360.930 - 10423.345: 2.3240% ( 76) 00:13:44.404 10423.345 - 10485.760: 3.3327% ( 102) 00:13:44.404 10485.760 - 10548.175: 4.3809% ( 106) 00:13:44.404 10548.175 - 10610.590: 5.5973% ( 123) 00:13:44.404 10610.590 - 10673.006: 6.8137% ( 123) 00:13:44.404 10673.006 - 10735.421: 7.7433% ( 94) 00:13:44.404 10735.421 - 10797.836: 8.6630% ( 93) 00:13:44.404 10797.836 - 10860.251: 10.0870% ( 144) 00:13:44.404 10860.251 - 10922.667: 11.4320% ( 136) 00:13:44.404 10922.667 - 10985.082: 13.0340% ( 162) 00:13:44.404 10985.082 - 11047.497: 14.9525% ( 194) 00:13:44.404 11047.497 - 11109.912: 17.0293% ( 210) 00:13:44.404 11109.912 - 11172.328: 19.5807% ( 258) 00:13:44.404 11172.328 - 11234.743: 21.9343% ( 238) 00:13:44.404 11234.743 - 11297.158: 24.3374% ( 243) 00:13:44.404 11297.158 - 11359.573: 27.1855% ( 288) 00:13:44.404 11359.573 - 11421.989: 30.1424% ( 299) 00:13:44.404 11421.989 - 11484.404: 33.1388% ( 303) 00:13:44.404 11484.404 - 11546.819: 36.1353% ( 303) 00:13:44.404 11546.819 - 11609.234: 39.2009% ( 310) 00:13:44.404 11609.234 - 11671.650: 41.9007% ( 273) 00:13:44.404 11671.650 - 11734.065: 44.7686% ( 290) 00:13:44.404 11734.065 - 11796.480: 47.2903% ( 255) 00:13:44.404 11796.480 - 11858.895: 49.7824% ( 252) 00:13:44.404 11858.895 - 11921.310: 51.8097% ( 205) 00:13:44.404 11921.310 - 11983.726: 53.8568% ( 207) 00:13:44.404 11983.726 - 12046.141: 55.8643% ( 203) 00:13:44.404 12046.141 - 12108.556: 58.1191% ( 228) 00:13:44.404 12108.556 - 12170.971: 60.1562% ( 206) 00:13:44.404 12170.971 - 12233.387: 62.0649% ( 193) 00:13:44.404 12233.387 - 12295.802: 63.9438% ( 190) 00:13:44.404 12295.802 - 12358.217: 65.7931% ( 187) 00:13:44.404 12358.217 - 12420.632: 67.4644% ( 169) 00:13:44.404 12420.632 - 12483.048: 68.9775% ( 153) 00:13:44.404 12483.048 - 12545.463: 70.5004% ( 154) 00:13:44.404 12545.463 - 12607.878: 71.7366% ( 125) 00:13:44.404 12607.878 - 12670.293: 72.7947% ( 107) 00:13:44.404 12670.293 - 12732.709: 73.7638% ( 98) 00:13:44.404 12732.709 - 12795.124: 74.8319% ( 108) 00:13:44.404 12795.124 - 12857.539: 75.7714% ( 95) 00:13:44.404 12857.539 - 12919.954: 76.5427% ( 78) 00:13:44.404 12919.954 - 12982.370: 77.3339% ( 80) 00:13:44.404 12982.370 - 13044.785: 78.1646% ( 84) 00:13:44.404 13044.785 - 13107.200: 78.8568% ( 70) 00:13:44.405 13107.200 - 13169.615: 79.3710% ( 52) 00:13:44.405 13169.615 - 13232.030: 79.7172% ( 35) 00:13:44.405 13232.030 - 13294.446: 80.0435% ( 33) 00:13:44.405 13294.446 - 13356.861: 80.3797% ( 34) 00:13:44.405 13356.861 - 13419.276: 80.6566% ( 28) 00:13:44.405 13419.276 - 13481.691: 80.8643% ( 21) 00:13:44.405 13481.691 - 13544.107: 81.1412% ( 28) 00:13:44.405 13544.107 - 13606.522: 81.3291% ( 19) 00:13:44.405 13606.522 - 13668.937: 81.5170% ( 19) 00:13:44.405 13668.937 - 13731.352: 81.7840% ( 27) 00:13:44.405 13731.352 - 13793.768: 81.9521% ( 17) 00:13:44.405 13793.768 - 13856.183: 82.1301% ( 18) 00:13:44.405 
13856.183 - 13918.598: 82.2983% ( 17) 00:13:44.405 13918.598 - 13981.013: 82.4367% ( 14) 00:13:44.405 13981.013 - 14043.429: 82.5356% ( 10) 00:13:44.405 14043.429 - 14105.844: 82.6444% ( 11) 00:13:44.405 14105.844 - 14168.259: 82.7828% ( 14) 00:13:44.405 14168.259 - 14230.674: 82.9806% ( 20) 00:13:44.405 14230.674 - 14293.090: 83.1191% ( 14) 00:13:44.405 14293.090 - 14355.505: 83.2575% ( 14) 00:13:44.405 14355.505 - 14417.920: 83.4850% ( 23) 00:13:44.405 14417.920 - 14480.335: 83.8706% ( 39) 00:13:44.405 14480.335 - 14542.750: 84.3058% ( 44) 00:13:44.405 14542.750 - 14605.166: 84.9189% ( 62) 00:13:44.405 14605.166 - 14667.581: 85.5222% ( 61) 00:13:44.405 14667.581 - 14729.996: 85.8683% ( 35) 00:13:44.405 14729.996 - 14792.411: 86.2540% ( 39) 00:13:44.405 14792.411 - 14854.827: 86.7385% ( 49) 00:13:44.405 14854.827 - 14917.242: 87.1737% ( 44) 00:13:44.405 14917.242 - 14979.657: 87.5593% ( 39) 00:13:44.405 14979.657 - 15042.072: 88.0538% ( 50) 00:13:44.405 15042.072 - 15104.488: 88.5977% ( 55) 00:13:44.405 15104.488 - 15166.903: 89.2702% ( 68) 00:13:44.405 15166.903 - 15229.318: 89.8833% ( 62) 00:13:44.405 15229.318 - 15291.733: 90.4371% ( 56) 00:13:44.405 15291.733 - 15354.149: 90.9612% ( 53) 00:13:44.405 15354.149 - 15416.564: 91.4557% ( 50) 00:13:44.405 15416.564 - 15478.979: 91.9007% ( 45) 00:13:44.405 15478.979 - 15541.394: 92.3754% ( 48) 00:13:44.405 15541.394 - 15603.810: 92.8995% ( 53) 00:13:44.405 15603.810 - 15666.225: 93.3050% ( 41) 00:13:44.405 15666.225 - 15728.640: 93.7302% ( 43) 00:13:44.405 15728.640 - 15791.055: 94.0368% ( 31) 00:13:44.405 15791.055 - 15853.470: 94.3532% ( 32) 00:13:44.405 15853.470 - 15915.886: 94.5906% ( 24) 00:13:44.405 15915.886 - 15978.301: 94.8180% ( 23) 00:13:44.405 15978.301 - 16103.131: 95.3323% ( 52) 00:13:44.405 16103.131 - 16227.962: 95.8169% ( 49) 00:13:44.405 16227.962 - 16352.792: 96.2025% ( 39) 00:13:44.405 16352.792 - 16477.623: 96.4893% ( 29) 00:13:44.405 16477.623 - 16602.453: 96.7761% ( 29) 00:13:44.405 16602.453 - 16727.284: 96.9739% ( 20) 00:13:44.405 16727.284 - 16852.114: 97.1717% ( 20) 00:13:44.405 16852.114 - 16976.945: 97.3695% ( 20) 00:13:44.405 16976.945 - 17101.775: 97.5376% ( 17) 00:13:44.405 17101.775 - 17226.606: 97.7156% ( 18) 00:13:44.405 17226.606 - 17351.436: 97.8343% ( 12) 00:13:44.405 17351.436 - 17476.267: 97.8837% ( 5) 00:13:44.405 17476.267 - 17601.097: 97.9529% ( 7) 00:13:44.405 17601.097 - 17725.928: 98.0123% ( 6) 00:13:44.405 17725.928 - 17850.758: 98.0419% ( 3) 00:13:44.405 17850.758 - 17975.589: 98.0617% ( 2) 00:13:44.405 17975.589 - 18100.419: 98.0914% ( 3) 00:13:44.405 18100.419 - 18225.250: 98.1013% ( 1) 00:13:44.405 18225.250 - 18350.080: 98.1705% ( 7) 00:13:44.405 18350.080 - 18474.910: 98.2298% ( 6) 00:13:44.405 18474.910 - 18599.741: 98.2892% ( 6) 00:13:44.405 18599.741 - 18724.571: 98.3386% ( 5) 00:13:44.405 18724.571 - 18849.402: 98.3979% ( 6) 00:13:44.405 18849.402 - 18974.232: 98.4573% ( 6) 00:13:44.405 18974.232 - 19099.063: 98.5166% ( 6) 00:13:44.405 19099.063 - 19223.893: 98.5759% ( 6) 00:13:44.405 19223.893 - 19348.724: 98.6353% ( 6) 00:13:44.405 19348.724 - 19473.554: 98.6847% ( 5) 00:13:44.405 19473.554 - 19598.385: 98.7342% ( 5) 00:13:44.405 31956.602 - 32206.263: 98.8133% ( 8) 00:13:44.405 32206.263 - 32455.924: 98.9419% ( 13) 00:13:44.405 32455.924 - 32705.585: 98.9715% ( 3) 00:13:44.405 32705.585 - 32955.246: 99.0210% ( 5) 00:13:44.405 32955.246 - 33204.907: 99.0704% ( 5) 00:13:44.405 33204.907 - 33454.568: 99.1297% ( 6) 00:13:44.405 33454.568 - 33704.229: 99.1891% ( 6) 00:13:44.405 
33704.229 - 33953.890: 99.2385% ( 5) 00:13:44.405 33953.890 - 34203.550: 99.2979% ( 6) 00:13:44.405 34203.550 - 34453.211: 99.3473% ( 5) 00:13:44.405 34453.211 - 34702.872: 99.3671% ( 2) 00:13:44.405 39696.091 - 39945.752: 99.3968% ( 3) 00:13:44.405 39945.752 - 40195.413: 99.4561% ( 6) 00:13:44.405 40195.413 - 40445.074: 99.5154% ( 6) 00:13:44.405 40445.074 - 40694.735: 99.5748% ( 6) 00:13:44.405 40694.735 - 40944.396: 99.6341% ( 6) 00:13:44.405 40944.396 - 41194.057: 99.6934% ( 6) 00:13:44.405 41194.057 - 41443.718: 99.7627% ( 7) 00:13:44.405 41443.718 - 41693.379: 99.8220% ( 6) 00:13:44.405 41693.379 - 41943.040: 99.8813% ( 6) 00:13:44.405 41943.040 - 42192.701: 99.9407% ( 6) 00:13:44.405 42192.701 - 42442.362: 100.0000% ( 6) 00:13:44.405 00:13:44.405 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:44.405 ============================================================================== 00:13:44.405 Range in us Cumulative IO count 00:13:44.405 9986.438 - 10048.853: 0.0593% ( 6) 00:13:44.405 10048.853 - 10111.269: 0.1187% ( 6) 00:13:44.405 10111.269 - 10173.684: 0.2275% ( 11) 00:13:44.405 10173.684 - 10236.099: 0.5241% ( 30) 00:13:44.405 10236.099 - 10298.514: 1.0186% ( 50) 00:13:44.405 10298.514 - 10360.930: 1.7702% ( 76) 00:13:44.405 10360.930 - 10423.345: 2.5811% ( 82) 00:13:44.405 10423.345 - 10485.760: 3.5403% ( 97) 00:13:44.405 10485.760 - 10548.175: 4.5589% ( 103) 00:13:44.405 10548.175 - 10610.590: 5.6566% ( 111) 00:13:44.405 10610.590 - 10673.006: 6.5368% ( 89) 00:13:44.405 10673.006 - 10735.421: 7.3576% ( 83) 00:13:44.405 10735.421 - 10797.836: 8.4850% ( 114) 00:13:44.405 10797.836 - 10860.251: 9.5530% ( 108) 00:13:44.405 10860.251 - 10922.667: 11.1353% ( 160) 00:13:44.405 10922.667 - 10985.082: 13.1131% ( 200) 00:13:44.405 10985.082 - 11047.497: 15.2294% ( 214) 00:13:44.405 11047.497 - 11109.912: 17.5930% ( 239) 00:13:44.405 11109.912 - 11172.328: 20.1642% ( 260) 00:13:44.405 11172.328 - 11234.743: 22.6068% ( 247) 00:13:44.405 11234.743 - 11297.158: 25.2176% ( 264) 00:13:44.405 11297.158 - 11359.573: 27.9371% ( 275) 00:13:44.405 11359.573 - 11421.989: 30.6962% ( 279) 00:13:44.405 11421.989 - 11484.404: 33.2674% ( 260) 00:13:44.405 11484.404 - 11546.819: 35.9968% ( 276) 00:13:44.405 11546.819 - 11609.234: 39.0427% ( 308) 00:13:44.405 11609.234 - 11671.650: 42.0688% ( 306) 00:13:44.405 11671.650 - 11734.065: 44.5312% ( 249) 00:13:44.405 11734.065 - 11796.480: 47.2805% ( 278) 00:13:44.405 11796.480 - 11858.895: 49.7923% ( 254) 00:13:44.405 11858.895 - 11921.310: 52.2449% ( 248) 00:13:44.405 11921.310 - 11983.726: 54.2820% ( 206) 00:13:44.405 11983.726 - 12046.141: 56.4379% ( 218) 00:13:44.405 12046.141 - 12108.556: 58.2674% ( 185) 00:13:44.405 12108.556 - 12170.971: 59.9881% ( 174) 00:13:44.405 12170.971 - 12233.387: 61.8374% ( 187) 00:13:44.405 12233.387 - 12295.802: 63.7065% ( 189) 00:13:44.405 12295.802 - 12358.217: 65.5558% ( 187) 00:13:44.405 12358.217 - 12420.632: 67.1974% ( 166) 00:13:44.405 12420.632 - 12483.048: 68.7006% ( 152) 00:13:44.405 12483.048 - 12545.463: 70.0356% ( 135) 00:13:44.405 12545.463 - 12607.878: 71.2520% ( 123) 00:13:44.405 12607.878 - 12670.293: 72.2706% ( 103) 00:13:44.405 12670.293 - 12732.709: 73.1804% ( 92) 00:13:44.405 12732.709 - 12795.124: 74.1891% ( 102) 00:13:44.405 12795.124 - 12857.539: 75.2176% ( 104) 00:13:44.405 12857.539 - 12919.954: 76.1669% ( 96) 00:13:44.405 12919.954 - 12982.370: 77.0372% ( 88) 00:13:44.405 12982.370 - 13044.785: 77.8679% ( 84) 00:13:44.405 13044.785 - 13107.200: 78.5107% ( 65) 00:13:44.405 13107.200 
- 13169.615: 79.0744% ( 57) 00:13:44.405 13169.615 - 13232.030: 79.5392% ( 47) 00:13:44.405 13232.030 - 13294.446: 79.9051% ( 37) 00:13:44.405 13294.446 - 13356.861: 80.2116% ( 31) 00:13:44.405 13356.861 - 13419.276: 80.4292% ( 22) 00:13:44.405 13419.276 - 13481.691: 80.7061% ( 28) 00:13:44.405 13481.691 - 13544.107: 80.9335% ( 23) 00:13:44.405 13544.107 - 13606.522: 81.1907% ( 26) 00:13:44.405 13606.522 - 13668.937: 81.4873% ( 30) 00:13:44.405 13668.937 - 13731.352: 81.7346% ( 25) 00:13:44.405 13731.352 - 13793.768: 81.9027% ( 17) 00:13:44.405 13793.768 - 13856.183: 82.0016% ( 10) 00:13:44.405 13856.183 - 13918.598: 82.1203% ( 12) 00:13:44.405 13918.598 - 13981.013: 82.2884% ( 17) 00:13:44.405 13981.013 - 14043.429: 82.5653% ( 28) 00:13:44.405 14043.429 - 14105.844: 82.7729% ( 21) 00:13:44.405 14105.844 - 14168.259: 83.0202% ( 25) 00:13:44.405 14168.259 - 14230.674: 83.1982% ( 18) 00:13:44.405 14230.674 - 14293.090: 83.4157% ( 22) 00:13:44.405 14293.090 - 14355.505: 83.6926% ( 28) 00:13:44.405 14355.505 - 14417.920: 84.1278% ( 44) 00:13:44.405 14417.920 - 14480.335: 84.5233% ( 40) 00:13:44.405 14480.335 - 14542.750: 84.8398% ( 32) 00:13:44.405 14542.750 - 14605.166: 85.2551% ( 42) 00:13:44.405 14605.166 - 14667.581: 85.6705% ( 42) 00:13:44.405 14667.581 - 14729.996: 86.0759% ( 41) 00:13:44.406 14729.996 - 14792.411: 86.5111% ( 44) 00:13:44.406 14792.411 - 14854.827: 86.9462% ( 44) 00:13:44.406 14854.827 - 14917.242: 87.4901% ( 55) 00:13:44.406 14917.242 - 14979.657: 88.0241% ( 54) 00:13:44.406 14979.657 - 15042.072: 88.6076% ( 59) 00:13:44.406 15042.072 - 15104.488: 89.2207% ( 62) 00:13:44.406 15104.488 - 15166.903: 89.7350% ( 52) 00:13:44.406 15166.903 - 15229.318: 90.2294% ( 50) 00:13:44.406 15229.318 - 15291.733: 90.7140% ( 49) 00:13:44.406 15291.733 - 15354.149: 91.1788% ( 47) 00:13:44.406 15354.149 - 15416.564: 91.6634% ( 49) 00:13:44.406 15416.564 - 15478.979: 92.1084% ( 45) 00:13:44.406 15478.979 - 15541.394: 92.6028% ( 50) 00:13:44.406 15541.394 - 15603.810: 93.0874% ( 49) 00:13:44.406 15603.810 - 15666.225: 93.4434% ( 36) 00:13:44.406 15666.225 - 15728.640: 93.7401% ( 30) 00:13:44.406 15728.640 - 15791.055: 94.0071% ( 27) 00:13:44.406 15791.055 - 15853.470: 94.2247% ( 22) 00:13:44.406 15853.470 - 15915.886: 94.4719% ( 25) 00:13:44.406 15915.886 - 15978.301: 94.7290% ( 26) 00:13:44.406 15978.301 - 16103.131: 95.1938% ( 47) 00:13:44.406 16103.131 - 16227.962: 95.5202% ( 33) 00:13:44.406 16227.962 - 16352.792: 95.7674% ( 25) 00:13:44.406 16352.792 - 16477.623: 95.9652% ( 20) 00:13:44.406 16477.623 - 16602.453: 96.2816% ( 32) 00:13:44.406 16602.453 - 16727.284: 96.6377% ( 36) 00:13:44.406 16727.284 - 16852.114: 96.8948% ( 26) 00:13:44.406 16852.114 - 16976.945: 97.1222% ( 23) 00:13:44.406 16976.945 - 17101.775: 97.3299% ( 21) 00:13:44.406 17101.775 - 17226.606: 97.5574% ( 23) 00:13:44.406 17226.606 - 17351.436: 97.8046% ( 25) 00:13:44.406 17351.436 - 17476.267: 97.9529% ( 15) 00:13:44.406 17476.267 - 17601.097: 98.0419% ( 9) 00:13:44.406 17601.097 - 17725.928: 98.0815% ( 4) 00:13:44.406 17725.928 - 17850.758: 98.1013% ( 2) 00:13:44.406 17975.589 - 18100.419: 98.1408% ( 4) 00:13:44.406 18100.419 - 18225.250: 98.2002% ( 6) 00:13:44.406 18225.250 - 18350.080: 98.2694% ( 7) 00:13:44.406 18350.080 - 18474.910: 98.3287% ( 6) 00:13:44.406 18474.910 - 18599.741: 98.3979% ( 7) 00:13:44.406 18599.741 - 18724.571: 98.4672% ( 7) 00:13:44.406 18724.571 - 18849.402: 98.5265% ( 6) 00:13:44.406 18849.402 - 18974.232: 98.5858% ( 6) 00:13:44.406 18974.232 - 19099.063: 98.6551% ( 7) 00:13:44.406 
19099.063 - 19223.893: 98.7144% ( 6) 00:13:44.406 19223.893 - 19348.724: 98.7342% ( 2) 00:13:44.406 29335.162 - 29459.992: 98.7737% ( 4) 00:13:44.406 29459.992 - 29584.823: 98.8034% ( 3) 00:13:44.406 29584.823 - 29709.653: 98.8232% ( 2) 00:13:44.406 29709.653 - 29834.484: 98.8528% ( 3) 00:13:44.406 29834.484 - 29959.314: 98.8726% ( 2) 00:13:44.406 29959.314 - 30084.145: 98.9023% ( 3) 00:13:44.406 30084.145 - 30208.975: 98.9320% ( 3) 00:13:44.406 30208.975 - 30333.806: 98.9616% ( 3) 00:13:44.406 30333.806 - 30458.636: 98.9913% ( 3) 00:13:44.406 30458.636 - 30583.467: 99.0210% ( 3) 00:13:44.406 30583.467 - 30708.297: 99.0407% ( 2) 00:13:44.406 30708.297 - 30833.128: 99.0704% ( 3) 00:13:44.406 30833.128 - 30957.958: 99.1001% ( 3) 00:13:44.406 30957.958 - 31082.789: 99.1297% ( 3) 00:13:44.406 31082.789 - 31207.619: 99.1594% ( 3) 00:13:44.406 31207.619 - 31332.450: 99.1891% ( 3) 00:13:44.406 31332.450 - 31457.280: 99.2188% ( 3) 00:13:44.406 31457.280 - 31582.110: 99.2484% ( 3) 00:13:44.406 31582.110 - 31706.941: 99.2682% ( 2) 00:13:44.406 31706.941 - 31831.771: 99.2979% ( 3) 00:13:44.406 31831.771 - 31956.602: 99.3275% ( 3) 00:13:44.406 31956.602 - 32206.263: 99.3671% ( 4) 00:13:44.406 37199.482 - 37449.143: 99.3770% ( 1) 00:13:44.406 37449.143 - 37698.804: 99.4363% ( 6) 00:13:44.406 37698.804 - 37948.465: 99.4956% ( 6) 00:13:44.406 37948.465 - 38198.126: 99.5451% ( 5) 00:13:44.406 38198.126 - 38447.787: 99.6143% ( 7) 00:13:44.406 38447.787 - 38697.448: 99.6737% ( 6) 00:13:44.406 38697.448 - 38947.109: 99.7330% ( 6) 00:13:44.406 38947.109 - 39196.770: 99.7923% ( 6) 00:13:44.406 39196.770 - 39446.430: 99.8517% ( 6) 00:13:44.406 39446.430 - 39696.091: 99.9110% ( 6) 00:13:44.406 39696.091 - 39945.752: 99.9703% ( 6) 00:13:44.406 39945.752 - 40195.413: 100.0000% ( 3) 00:13:44.406 00:13:44.406 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:44.406 ============================================================================== 00:13:44.406 Range in us Cumulative IO count 00:13:44.406 9799.192 - 9861.608: 0.0396% ( 4) 00:13:44.406 9861.608 - 9924.023: 0.1088% ( 7) 00:13:44.406 9924.023 - 9986.438: 0.1681% ( 6) 00:13:44.406 9986.438 - 10048.853: 0.2373% ( 7) 00:13:44.406 10048.853 - 10111.269: 0.3659% ( 13) 00:13:44.406 10111.269 - 10173.684: 0.5538% ( 19) 00:13:44.406 10173.684 - 10236.099: 0.8406% ( 29) 00:13:44.406 10236.099 - 10298.514: 1.3054% ( 47) 00:13:44.406 10298.514 - 10360.930: 1.8790% ( 58) 00:13:44.406 10360.930 - 10423.345: 2.6602% ( 79) 00:13:44.406 10423.345 - 10485.760: 3.3623% ( 71) 00:13:44.406 10485.760 - 10548.175: 4.1930% ( 84) 00:13:44.406 10548.175 - 10610.590: 5.1523% ( 97) 00:13:44.406 10610.590 - 10673.006: 6.1511% ( 101) 00:13:44.406 10673.006 - 10735.421: 7.5059% ( 137) 00:13:44.406 10735.421 - 10797.836: 8.8509% ( 136) 00:13:44.406 10797.836 - 10860.251: 10.2947% ( 146) 00:13:44.406 10860.251 - 10922.667: 11.7089% ( 143) 00:13:44.406 10922.667 - 10985.082: 13.3406% ( 165) 00:13:44.406 10985.082 - 11047.497: 15.2492% ( 193) 00:13:44.406 11047.497 - 11109.912: 17.5930% ( 237) 00:13:44.406 11109.912 - 11172.328: 20.1741% ( 261) 00:13:44.406 11172.328 - 11234.743: 23.1903% ( 305) 00:13:44.406 11234.743 - 11297.158: 25.5835% ( 242) 00:13:44.406 11297.158 - 11359.573: 28.1942% ( 264) 00:13:44.406 11359.573 - 11421.989: 30.8544% ( 269) 00:13:44.406 11421.989 - 11484.404: 33.4949% ( 267) 00:13:44.406 11484.404 - 11546.819: 36.4221% ( 296) 00:13:44.406 11546.819 - 11609.234: 39.1713% ( 278) 00:13:44.406 11609.234 - 11671.650: 42.0095% ( 287) 00:13:44.406 
11671.650 - 11734.065: 44.5411% ( 256) 00:13:44.406 11734.065 - 11796.480: 47.0926% ( 258) 00:13:44.406 11796.480 - 11858.895: 49.3572% ( 229) 00:13:44.406 11858.895 - 11921.310: 51.5032% ( 217) 00:13:44.406 11921.310 - 11983.726: 53.6986% ( 222) 00:13:44.406 11983.726 - 12046.141: 55.6665% ( 199) 00:13:44.406 12046.141 - 12108.556: 57.4268% ( 178) 00:13:44.406 12108.556 - 12170.971: 59.4146% ( 201) 00:13:44.406 12170.971 - 12233.387: 61.5210% ( 213) 00:13:44.406 12233.387 - 12295.802: 63.3703% ( 187) 00:13:44.406 12295.802 - 12358.217: 65.3184% ( 197) 00:13:44.406 12358.217 - 12420.632: 67.0886% ( 179) 00:13:44.406 12420.632 - 12483.048: 68.6610% ( 159) 00:13:44.406 12483.048 - 12545.463: 70.0356% ( 139) 00:13:44.406 12545.463 - 12607.878: 71.3904% ( 137) 00:13:44.406 12607.878 - 12670.293: 72.5475% ( 117) 00:13:44.406 12670.293 - 12732.709: 73.5661% ( 103) 00:13:44.406 12732.709 - 12795.124: 74.4363% ( 88) 00:13:44.406 12795.124 - 12857.539: 75.3263% ( 90) 00:13:44.406 12857.539 - 12919.954: 76.1076% ( 79) 00:13:44.406 12919.954 - 12982.370: 76.8196% ( 72) 00:13:44.406 12982.370 - 13044.785: 77.4525% ( 64) 00:13:44.406 13044.785 - 13107.200: 78.1250% ( 68) 00:13:44.406 13107.200 - 13169.615: 78.7381% ( 62) 00:13:44.406 13169.615 - 13232.030: 79.2227% ( 49) 00:13:44.406 13232.030 - 13294.446: 79.6381% ( 42) 00:13:44.406 13294.446 - 13356.861: 79.9051% ( 27) 00:13:44.406 13356.861 - 13419.276: 80.2215% ( 32) 00:13:44.406 13419.276 - 13481.691: 80.4885% ( 27) 00:13:44.406 13481.691 - 13544.107: 80.7259% ( 24) 00:13:44.406 13544.107 - 13606.522: 80.9830% ( 26) 00:13:44.406 13606.522 - 13668.937: 81.1808% ( 20) 00:13:44.406 13668.937 - 13731.352: 81.3489% ( 17) 00:13:44.406 13731.352 - 13793.768: 81.5368% ( 19) 00:13:44.406 13793.768 - 13856.183: 81.8631% ( 33) 00:13:44.406 13856.183 - 13918.598: 82.1301% ( 27) 00:13:44.406 13918.598 - 13981.013: 82.3477% ( 22) 00:13:44.406 13981.013 - 14043.429: 82.5752% ( 23) 00:13:44.406 14043.429 - 14105.844: 82.8422% ( 27) 00:13:44.406 14105.844 - 14168.259: 83.1388% ( 30) 00:13:44.406 14168.259 - 14230.674: 83.3861% ( 25) 00:13:44.406 14230.674 - 14293.090: 83.6333% ( 25) 00:13:44.406 14293.090 - 14355.505: 83.8509% ( 22) 00:13:44.406 14355.505 - 14417.920: 84.1179% ( 27) 00:13:44.406 14417.920 - 14480.335: 84.4541% ( 34) 00:13:44.406 14480.335 - 14542.750: 84.8497% ( 40) 00:13:44.406 14542.750 - 14605.166: 85.2255% ( 38) 00:13:44.406 14605.166 - 14667.581: 85.5419% ( 32) 00:13:44.406 14667.581 - 14729.996: 85.9276% ( 39) 00:13:44.406 14729.996 - 14792.411: 86.3232% ( 40) 00:13:44.406 14792.411 - 14854.827: 86.7682% ( 45) 00:13:44.406 14854.827 - 14917.242: 87.2330% ( 47) 00:13:44.406 14917.242 - 14979.657: 87.8560% ( 63) 00:13:44.406 14979.657 - 15042.072: 88.5581% ( 71) 00:13:44.406 15042.072 - 15104.488: 89.2405% ( 69) 00:13:44.406 15104.488 - 15166.903: 89.8141% ( 58) 00:13:44.406 15166.903 - 15229.318: 90.3184% ( 51) 00:13:44.406 15229.318 - 15291.733: 90.8327% ( 52) 00:13:44.406 15291.733 - 15354.149: 91.3370% ( 51) 00:13:44.406 15354.149 - 15416.564: 91.8414% ( 51) 00:13:44.406 15416.564 - 15478.979: 92.3358% ( 50) 00:13:44.406 15478.979 - 15541.394: 92.8006% ( 47) 00:13:44.406 15541.394 - 15603.810: 93.2358% ( 44) 00:13:44.406 15603.810 - 15666.225: 93.6116% ( 38) 00:13:44.406 15666.225 - 15728.640: 93.9577% ( 35) 00:13:44.406 15728.640 - 15791.055: 94.2741% ( 32) 00:13:44.406 15791.055 - 15853.470: 94.5312% ( 26) 00:13:44.406 15853.470 - 15915.886: 94.7686% ( 24) 00:13:44.406 15915.886 - 15978.301: 94.9367% ( 17) 00:13:44.406 15978.301 - 
16103.131: 95.3026% ( 37) 00:13:44.406 16103.131 - 16227.962: 95.5498% ( 25) 00:13:44.406 16227.962 - 16352.792: 95.7180% ( 17) 00:13:44.406 16352.792 - 16477.623: 95.8564% ( 14) 00:13:44.406 16477.623 - 16602.453: 96.0146% ( 16) 00:13:44.406 16602.453 - 16727.284: 96.1828% ( 17) 00:13:44.406 16727.284 - 16852.114: 96.3410% ( 16) 00:13:44.407 16852.114 - 16976.945: 96.4498% ( 11) 00:13:44.407 16976.945 - 17101.775: 96.5981% ( 15) 00:13:44.407 17101.775 - 17226.606: 96.7464% ( 15) 00:13:44.407 17226.606 - 17351.436: 96.9047% ( 16) 00:13:44.407 17351.436 - 17476.267: 97.1025% ( 20) 00:13:44.407 17476.267 - 17601.097: 97.3200% ( 22) 00:13:44.407 17601.097 - 17725.928: 97.5969% ( 28) 00:13:44.407 17725.928 - 17850.758: 97.7749% ( 18) 00:13:44.407 17850.758 - 17975.589: 98.0419% ( 27) 00:13:44.407 17975.589 - 18100.419: 98.2002% ( 16) 00:13:44.407 18100.419 - 18225.250: 98.3188% ( 12) 00:13:44.407 18225.250 - 18350.080: 98.3979% ( 8) 00:13:44.407 18350.080 - 18474.910: 98.4869% ( 9) 00:13:44.407 18474.910 - 18599.741: 98.5858% ( 10) 00:13:44.407 18599.741 - 18724.571: 98.6551% ( 7) 00:13:44.407 18724.571 - 18849.402: 98.7243% ( 7) 00:13:44.407 18849.402 - 18974.232: 98.7342% ( 1) 00:13:44.407 25964.739 - 26089.570: 98.7441% ( 1) 00:13:44.407 26089.570 - 26214.400: 98.7737% ( 3) 00:13:44.407 26214.400 - 26339.230: 98.8034% ( 3) 00:13:44.407 26339.230 - 26464.061: 98.8331% ( 3) 00:13:44.407 26464.061 - 26588.891: 98.8627% ( 3) 00:13:44.407 26588.891 - 26713.722: 98.8825% ( 2) 00:13:44.407 26713.722 - 26838.552: 98.9023% ( 2) 00:13:44.407 26838.552 - 26963.383: 98.9320% ( 3) 00:13:44.407 26963.383 - 27088.213: 98.9616% ( 3) 00:13:44.407 27088.213 - 27213.044: 98.9913% ( 3) 00:13:44.407 27213.044 - 27337.874: 99.0210% ( 3) 00:13:44.407 27337.874 - 27462.705: 99.0407% ( 2) 00:13:44.407 27462.705 - 27587.535: 99.0704% ( 3) 00:13:44.407 27587.535 - 27712.366: 99.1001% ( 3) 00:13:44.407 27712.366 - 27837.196: 99.1297% ( 3) 00:13:44.407 27837.196 - 27962.027: 99.1594% ( 3) 00:13:44.407 27962.027 - 28086.857: 99.1891% ( 3) 00:13:44.407 28086.857 - 28211.688: 99.2188% ( 3) 00:13:44.407 28211.688 - 28336.518: 99.2583% ( 4) 00:13:44.407 28336.518 - 28461.349: 99.2880% ( 3) 00:13:44.407 28461.349 - 28586.179: 99.3176% ( 3) 00:13:44.407 28586.179 - 28711.010: 99.3374% ( 2) 00:13:44.407 28711.010 - 28835.840: 99.3671% ( 3) 00:13:44.407 33953.890 - 34203.550: 99.3770% ( 1) 00:13:44.407 34203.550 - 34453.211: 99.4363% ( 6) 00:13:44.407 34453.211 - 34702.872: 99.4858% ( 5) 00:13:44.407 34702.872 - 34952.533: 99.5352% ( 5) 00:13:44.407 34952.533 - 35202.194: 99.5945% ( 6) 00:13:44.407 35202.194 - 35451.855: 99.6539% ( 6) 00:13:44.407 35451.855 - 35701.516: 99.7132% ( 6) 00:13:44.407 35701.516 - 35951.177: 99.7725% ( 6) 00:13:44.407 35951.177 - 36200.838: 99.8319% ( 6) 00:13:44.407 36200.838 - 36450.499: 99.8912% ( 6) 00:13:44.407 36450.499 - 36700.160: 99.9506% ( 6) 00:13:44.407 36700.160 - 36949.821: 100.0000% ( 5) 00:13:44.407 00:13:44.407 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:44.407 ============================================================================== 00:13:44.407 Range in us Cumulative IO count 00:13:44.407 9861.608 - 9924.023: 0.0198% ( 2) 00:13:44.407 9924.023 - 9986.438: 0.0791% ( 6) 00:13:44.407 9986.438 - 10048.853: 0.1384% ( 6) 00:13:44.407 10048.853 - 10111.269: 0.2077% ( 7) 00:13:44.407 10111.269 - 10173.684: 0.3857% ( 18) 00:13:44.407 10173.684 - 10236.099: 0.7120% ( 33) 00:13:44.407 10236.099 - 10298.514: 1.0186% ( 31) 00:13:44.407 10298.514 - 10360.930: 1.4834% 
00:13:44.407 [nvme_perf latency histogram, truncated at the start of this capture: cumulative-count buckets from 10360.930 us to 33704.229 us elided]
00:13:44.407 [the cumulative count crosses 50% near 11.9 ms, 75% near 12.8 ms, 90% near 15.2 ms, and reaches 100.0000% at 33704.229 us ( 1)]
00:13:44.408 
00:13:44.408 13:39:38 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:13:44.408 
00:13:44.408 real 0m2.879s
00:13:44.408 user 0m2.382s
00:13:44.408 sys 0m0.387s
00:13:44.408 13:39:38 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:44.408 13:39:38 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:13:44.408 ************************************
00:13:44.408 END TEST nvme_perf
00:13:44.408 ************************************
00:13:44.666 13:39:38 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:13:44.667 13:39:38 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:13:44.667 13:39:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:44.667 13:39:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:44.667 ************************************
00:13:44.667 START TEST nvme_hello_world
00:13:44.667 ************************************
00:13:44.667 13:39:38 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:13:44.926 Initializing NVMe Controllers
00:13:44.926 Attached to 0000:00:13.0
00:13:44.926 Namespace ID: 1 size: 1GB
00:13:44.926 Attached to 0000:00:10.0
00:13:44.926 Namespace ID: 1 size: 6GB
00:13:44.926 Attached to 0000:00:11.0
00:13:44.926 Namespace ID: 1 size: 5GB
00:13:44.926 Attached to 0000:00:12.0
00:13:44.926 Namespace ID: 1 size: 4GB
00:13:44.926 Namespace ID: 2 size: 4GB
00:13:44.926 Namespace ID: 3 size: 4GB
00:13:44.926 Initialization complete.
00:13:44.926 INFO: using host memory buffer for IO
00:13:44.926 Hello world!
00:13:44.926 INFO: using host memory buffer for IO
00:13:44.926 Hello world!
00:13:44.926 INFO: using host memory buffer for IO
00:13:44.926 Hello world!
00:13:44.926 INFO: using host memory buffer for IO
00:13:44.926 Hello world!
00:13:44.926 INFO: using host memory buffer for IO
00:13:44.926 Hello world!
00:13:44.926 INFO: using host memory buffer for IO
00:13:44.926 Hello world!
00:13:44.926 00:13:44.926 real 0m0.395s 00:13:44.926 user 0m0.145s 00:13:44.926 sys 0m0.202s 00:13:44.926 13:39:38 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.926 13:39:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:44.926 ************************************ 00:13:44.926 END TEST nvme_hello_world 00:13:44.926 ************************************ 00:13:44.926 13:39:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:44.926 13:39:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:44.926 13:39:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:44.926 13:39:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.926 ************************************ 00:13:44.926 START TEST nvme_sgl 00:13:44.926 ************************************ 00:13:44.926 13:39:38 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:13:45.492 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:13:45.492 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:13:45.492 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:13:45.492 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:13:45.492 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:13:45.492 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:13:45.492 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:13:45.492 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:13:45.492 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:13:45.492 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:13:45.492 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:13:45.492 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:13:45.492 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:13:45.492 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:13:45.492 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:13:45.492 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:13:45.492 NVMe Readv/Writev Request test 00:13:45.492 Attached to 0000:00:13.0 00:13:45.492 Attached to 0000:00:10.0 00:13:45.492 Attached to 0000:00:11.0 00:13:45.492 Attached to 0000:00:12.0 00:13:45.492 0000:00:10.0: build_io_request_2 test passed 00:13:45.492 0000:00:10.0: build_io_request_4 test passed 00:13:45.492 0000:00:10.0: build_io_request_5 test passed 00:13:45.492 0000:00:10.0: build_io_request_6 test passed 00:13:45.492 0000:00:10.0: build_io_request_7 test passed 00:13:45.492 0000:00:10.0: build_io_request_10 test passed 00:13:45.492 0000:00:11.0: build_io_request_2 test passed 00:13:45.492 0000:00:11.0: build_io_request_4 test passed 00:13:45.492 0000:00:11.0: build_io_request_5 test passed 00:13:45.492 0000:00:11.0: build_io_request_6 test passed 00:13:45.492 0000:00:11.0: build_io_request_7 test passed 00:13:45.492 0000:00:11.0: build_io_request_10 test passed 00:13:45.492 Cleaning up... 00:13:45.492 00:13:45.492 real 0m0.489s 00:13:45.492 user 0m0.270s 00:13:45.492 sys 0m0.175s 00:13:45.492 13:39:39 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:45.492 13:39:39 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:13:45.492 ************************************ 00:13:45.492 END TEST nvme_sgl 00:13:45.492 ************************************ 00:13:45.492 13:39:39 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:45.492 13:39:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:45.492 13:39:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:45.492 13:39:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.492 ************************************ 00:13:45.492 START TEST nvme_e2edp 00:13:45.492 ************************************ 00:13:45.492 13:39:39 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:13:46.105 NVMe Write/Read with End-to-End data protection test 00:13:46.105 Attached to 0000:00:13.0 00:13:46.105 Attached to 0000:00:10.0 00:13:46.105 Attached to 0000:00:11.0 00:13:46.105 Attached to 0000:00:12.0 00:13:46.105 Cleaning up... 
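The nvme_dp binary above exercises end-to-end data protection: I/O issued with protection information (DIF) enabled so the controller can generate and verify guard and reference tags in flight. A hedged sketch of one protected write, assuming SPDK's metadata-aware command variant and PI helpers (spdk_nvme_ns_cmd_write_with_md, spdk_nvme_ns_get_pi_type, the SPDK_NVME_IO_FLAGS_PR* flags); whether a namespace accepts this depends on its format, which may be why this run prints nothing between the attach lines and "Cleaning up...":

    #include "spdk/nvme.h"
    #include <errno.h>

    /* One protected write: with PRACT plus the PRCHK flags set and a namespace
     * formatted with protection information, the controller inserts and checks
     * the DIF guard/reference tags itself. Names are from SPDK's public API as
     * assumed here, not verified against this exact tree. */
    static int protected_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                               void *buf, uint64_t lba, uint32_t nlb,
                               spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        uint32_t flags;

        if (spdk_nvme_ns_get_pi_type(ns) == SPDK_NVME_FMT_NVM_PROTECTION_DISABLE)
            return -ENOTSUP; /* namespace has no protection information */

        flags = SPDK_NVME_IO_FLAGS_PRACT |
                SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;
        return spdk_nvme_ns_cmd_write_with_md(ns, qp, buf,
                                              NULL /* metadata: none with PRACT */,
                                              lba, nlb, cb_fn, cb_arg, flags,
                                              0 /* apptag mask */, 0 /* apptag */);
    }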
00:13:46.105 00:13:46.105 real 0m0.368s 00:13:46.105 user 0m0.141s 00:13:46.105 sys 0m0.182s 00:13:46.105 13:39:39 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.105 ************************************ 00:13:46.105 END TEST nvme_e2edp 00:13:46.105 13:39:39 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:13:46.105 ************************************ 00:13:46.105 13:39:39 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:46.105 13:39:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:46.105 13:39:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:46.105 13:39:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.105 ************************************ 00:13:46.105 START TEST nvme_reserve 00:13:46.105 ************************************ 00:13:46.105 13:39:39 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:13:46.364 ===================================================== 00:13:46.364 NVMe Controller at PCI bus 0, device 19, function 0 00:13:46.364 ===================================================== 00:13:46.364 Reservations: Not Supported 00:13:46.364 ===================================================== 00:13:46.364 NVMe Controller at PCI bus 0, device 16, function 0 00:13:46.364 ===================================================== 00:13:46.364 Reservations: Not Supported 00:13:46.364 ===================================================== 00:13:46.364 NVMe Controller at PCI bus 0, device 17, function 0 00:13:46.364 ===================================================== 00:13:46.364 Reservations: Not Supported 00:13:46.364 ===================================================== 00:13:46.364 NVMe Controller at PCI bus 0, device 18, function 0 00:13:46.364 ===================================================== 00:13:46.364 Reservations: Not Supported 00:13:46.364 Reservation test passed 00:13:46.364 00:13:46.364 real 0m0.338s 00:13:46.364 user 0m0.126s 00:13:46.364 sys 0m0.162s 00:13:46.364 13:39:40 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.364 13:39:40 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:13:46.364 ************************************ 00:13:46.364 END TEST nvme_reserve 00:13:46.364 ************************************ 00:13:46.364 13:39:40 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:46.364 13:39:40 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:46.364 13:39:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:46.364 13:39:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.364 ************************************ 00:13:46.364 START TEST nvme_err_injection 00:13:46.364 ************************************ 00:13:46.364 13:39:40 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:13:46.931 NVMe Error Injection test 00:13:46.931 Attached to 0000:00:13.0 00:13:46.931 Attached to 0000:00:10.0 00:13:46.931 Attached to 0000:00:11.0 00:13:46.931 Attached to 0000:00:12.0 00:13:46.931 0000:00:13.0: get features failed as expected 00:13:46.931 0000:00:10.0: get features failed as expected 00:13:46.931 0000:00:11.0: get features failed as expected 00:13:46.931 0000:00:12.0: get features failed as expected 00:13:46.931 
0000:00:13.0: get features successfully as expected 00:13:46.931 0000:00:10.0: get features successfully as expected 00:13:46.931 0000:00:11.0: get features successfully as expected 00:13:46.931 0000:00:12.0: get features successfully as expected 00:13:46.931 0000:00:13.0: read failed as expected 00:13:46.931 0000:00:10.0: read failed as expected 00:13:46.931 0000:00:11.0: read failed as expected 00:13:46.931 0000:00:12.0: read failed as expected 00:13:46.931 0000:00:13.0: read successfully as expected 00:13:46.931 0000:00:10.0: read successfully as expected 00:13:46.931 0000:00:11.0: read successfully as expected 00:13:46.931 0000:00:12.0: read successfully as expected 00:13:46.931 Cleaning up... 00:13:46.931 00:13:46.931 real 0m0.411s 00:13:46.931 user 0m0.152s 00:13:46.931 sys 0m0.191s 00:13:46.931 13:39:40 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.931 13:39:40 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:13:46.931 ************************************ 00:13:46.931 END TEST nvme_err_injection 00:13:46.931 ************************************ 00:13:46.931 13:39:40 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:46.931 13:39:40 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:13:46.931 13:39:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:46.931 13:39:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.931 ************************************ 00:13:46.931 START TEST nvme_overhead 00:13:46.931 ************************************ 00:13:46.931 13:39:40 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:13:48.306 Initializing NVMe Controllers 00:13:48.306 Attached to 0000:00:13.0 00:13:48.306 Attached to 0000:00:10.0 00:13:48.306 Attached to 0000:00:11.0 00:13:48.306 Attached to 0000:00:12.0 00:13:48.306 Initialization complete. Launching workers. 
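The overhead test that just launched measures per-command software cost rather than device latency: how long the CPU spends inside the submission call, and how long inside the completion poll that reaps each I/O, reported in nanoseconds with cumulative histograms. A rough sketch of one such measurement, assuming spdk_get_ticks()/spdk_get_ticks_hz() for timestamps; the real tool's bookkeeping is more careful than this:

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static int g_outstanding;

    static void done_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_outstanding--;
    }

    /* Time one read the way the overhead tool reports it: "submit" is CPU time
     * spent inside the submission call, "complete" is CPU time spent inside the
     * completion poll that actually reaps the I/O. */
    static void time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                              void *buf, uint64_t *submit_ns, uint64_t *complete_ns)
    {
        uint64_t hz = spdk_get_ticks_hz();
        uint64_t t0, t1;

        g_outstanding = 1;
        t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qp, buf, 0 /* LBA */, 1, done_cb, NULL, 0);
        *submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;

        do {
            t0 = spdk_get_ticks();
            spdk_nvme_qpair_process_completions(qp, 0 /* no limit */);
            t1 = spdk_get_ticks();
        } while (g_outstanding > 0);
        *complete_ns = (t1 - t0) * 1000000000ULL / hz;
    }

Read this way, the figures below are self-consistent: the 18138.8 ns submit average sits just above the submit histogram's 50% crossing near 16.5 us, and the 12371.2 ns complete average just above the complete histogram's 11.093 us median.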
00:13:48.306 submit (in ns) avg, min, max = 18138.8, 14024.8, 135703.8
00:13:48.306 complete (in ns) avg, min, max = 12371.2, 9096.2, 229973.3
00:13:48.306 
00:13:48.306 Submit histogram
00:13:48.306 ================
00:13:48.306 Range in us Cumulative Count
00:13:48.306 [cumulative-count buckets from 14.019 us to 136.533 us elided; the count crosses 50% near 16.5 us, 90% near 21.3 us, and reaches 100.0000% at 136.533 us ( 1)]
00:13:48.307 
00:13:48.307 Complete histogram
00:13:48.307 ==================
00:13:48.307 Range in us Cumulative Count
00:13:48.308 [cumulative-count buckets from 9.082 us to 230.156 us elided; the count crosses 50% at 11.093 us, 90% near 14.8 us, and reaches 100.0000% at 230.156 us ( 1)]
00:13:48.309 
00:13:48.309 real 0m1.374s
00:13:48.309 user 0m1.132s
00:13:48.309 sys 0m0.190s
00:13:48.309 13:39:42 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:48.309 13:39:42 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:13:48.309 ************************************
00:13:48.309 END TEST nvme_overhead
00:13:48.309 ************************************
00:13:48.309 13:39:42 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:13:48.309 13:39:42 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:13:48.309 13:39:42 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:48.309 13:39:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:48.309 ************************************
00:13:48.309 START TEST nvme_arbitration
00:13:48.309 ************************************
00:13:48.309 13:39:42 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:13:52.514 Initializing NVMe Controllers
00:13:52.514 Attached to 0000:00:13.0
00:13:52.514 Attached to 0000:00:10.0
00:13:52.514 Attached to 0000:00:11.0
00:13:52.514 Attached to 0000:00:12.0
00:13:52.514 Associating QEMU NVMe Ctrl (12343 ) with lcore 0
00:13:52.514 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:13:52.514 Associating QEMU NVMe Ctrl (12341 ) with lcore 2
00:13:52.514 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:13:52.514 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:13:52.514 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:13:52.514 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:13:52.514 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:13:52.514 Initialization complete. Launching workers.
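The arbitration example just launched drives the same controllers from four cores while asking the device for weighted-round-robin arbitration, under which queue pairs carry a priority class instead of all being equal. A sketch of the two knobs involved, assuming the option fields SPDK exposes for this (arb_mechanism on the controller options, qprio on the queue-pair options); treat the exact names and defaults as assumptions:

    #include "spdk/nvme.h"

    /* Request weighted-round-robin arbitration at probe time... */
    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR; /* controller may not support it */
        return true;
    }

    /* ...then create one queue pair in the urgent priority class. */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts qopts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
        qopts.qprio = SPDK_NVME_QPRIO_URGENT; /* vs HIGH / MEDIUM / LOW */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
    }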
00:13:52.514 Starting thread on core 1 with urgent priority queue 00:13:52.514 Starting thread on core 2 with urgent priority queue 00:13:52.514 Starting thread on core 3 with urgent priority queue 00:13:52.514 Starting thread on core 0 with urgent priority queue 00:13:52.514 QEMU NVMe Ctrl (12343 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:13:52.514 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:13:52.514 QEMU NVMe Ctrl (12340 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:13:52.514 QEMU NVMe Ctrl (12342 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:13:52.514 QEMU NVMe Ctrl (12341 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:13:52.514 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:13:52.514 ======================================================== 00:13:52.514 00:13:52.514 00:13:52.514 real 0m3.556s 00:13:52.514 user 0m9.514s 00:13:52.514 sys 0m0.223s 00:13:52.514 13:39:45 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.514 13:39:45 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:13:52.514 ************************************ 00:13:52.514 END TEST nvme_arbitration 00:13:52.514 ************************************ 00:13:52.514 13:39:45 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:52.514 13:39:45 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:52.514 13:39:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.514 13:39:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.514 ************************************ 00:13:52.514 START TEST nvme_single_aen 00:13:52.514 ************************************ 00:13:52.514 13:39:45 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:52.514 Asynchronous Event Request test 00:13:52.514 Attached to 0000:00:13.0 00:13:52.514 Attached to 0000:00:10.0 00:13:52.514 Attached to 0000:00:11.0 00:13:52.515 Attached to 0000:00:12.0 00:13:52.515 Reset controller to setup AER completions for this process 00:13:52.515 Registering asynchronous event callbacks... 
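The AER test registers a callback per controller, then provokes an event by setting the temperature threshold below the device's current temperature; when the controller posts the Asynchronous Event Request, the callback decodes the completion and the test reads the indicated log page (2, SMART / health information), which is what the aer_cb lines that follow show. A minimal sketch of the callback side, assuming spdk_nvme_ctrlr_register_aer_callback and the async-event completion layout from spdk/nvme_spec.h:

    #include "spdk/nvme.h"
    #include <stdio.h>

    /* Runs whenever an Asynchronous Event Request completes; the event type,
     * info and log page are packed into cdw0 of the completion, matching the
     * "aen_event_type: 0x01, aen_event_info: 0x01" lines in this log. */
    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        union spdk_nvme_async_event_completion ev;

        ev.raw = cpl->cdw0;
        printf("aer_cb: type 0x%02x info 0x%02x log page 0x%02x\n",
               ev.bits.async_event_type, ev.bits.async_event_info,
               ev.bits.log_page_identifier);
    }

    /* After attach: spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
     * the test then lowers the temperature threshold (Set Features, feature
     * 0x04) so the controller fires the event. */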
00:13:52.515 Getting orig temperature thresholds of all controllers 00:13:52.515 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:52.515 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:52.515 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:52.515 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:52.515 Setting all controllers temperature threshold low to trigger AER 00:13:52.515 Waiting for all controllers temperature threshold to be set lower 00:13:52.515 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:52.515 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:52.515 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:52.515 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:52.515 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:52.515 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:52.515 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:52.515 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:52.515 Waiting for all controllers to trigger AER and reset threshold 00:13:52.515 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:52.515 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:52.515 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:52.515 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:52.515 Cleaning up... 00:13:52.515 00:13:52.515 real 0m0.303s 00:13:52.515 user 0m0.100s 00:13:52.515 sys 0m0.151s 00:13:52.515 13:39:46 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.515 ************************************ 00:13:52.515 13:39:46 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:13:52.515 END TEST nvme_single_aen 00:13:52.515 ************************************ 00:13:52.515 13:39:46 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:13:52.515 13:39:46 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:52.515 13:39:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.515 13:39:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.515 ************************************ 00:13:52.515 START TEST nvme_doorbell_aers 00:13:52.515 ************************************ 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
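The shell trace above builds the bdfs array by asking gen_nvme.sh for a config and extracting each controller's PCI address with jq. Inside an SPDK program the same list falls out of enumeration: the probe callback sees every discovered controller's transport ID before any attach happens. A small self-contained sketch (the binary name and output format are invented for illustration):

    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    /* Print the PCI address (traddr) of every NVMe controller found; returning
     * false from the probe callback enumerates without attaching. */
    static bool list_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                        struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("%s\n", trid->traddr); /* e.g. 0000:00:10.0 */
        return false;
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "list_nvme"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0)
            return 1;
        spdk_nvme_probe(NULL, NULL, list_cb, NULL, NULL);
        return 0;
    }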
00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:52.515 13:39:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:52.515 [2024-11-06 13:39:46.446602] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:02.490 Executing: test_write_invalid_db 00:14:02.490 Waiting for AER completion... 00:14:02.490 Failure: test_write_invalid_db 00:14:02.490 00:14:02.490 Executing: test_invalid_db_write_overflow_sq 00:14:02.490 Waiting for AER completion... 00:14:02.490 Failure: test_invalid_db_write_overflow_sq 00:14:02.490 00:14:02.490 Executing: test_invalid_db_write_overflow_cq 00:14:02.490 Waiting for AER completion... 00:14:02.490 Failure: test_invalid_db_write_overflow_cq 00:14:02.490 00:14:02.490 13:39:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:02.490 13:39:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:02.748 [2024-11-06 13:39:56.589219] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:12.726 Executing: test_write_invalid_db 00:14:12.726 Waiting for AER completion... 00:14:12.726 Failure: test_write_invalid_db 00:14:12.726 00:14:12.726 Executing: test_invalid_db_write_overflow_sq 00:14:12.726 Waiting for AER completion... 00:14:12.726 Failure: test_invalid_db_write_overflow_sq 00:14:12.726 00:14:12.726 Executing: test_invalid_db_write_overflow_cq 00:14:12.726 Waiting for AER completion... 00:14:12.726 Failure: test_invalid_db_write_overflow_cq 00:14:12.726 00:14:12.726 13:40:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:12.727 13:40:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:12.727 [2024-11-06 13:40:06.628633] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:22.703 Executing: test_write_invalid_db 00:14:22.703 Waiting for AER completion... 00:14:22.703 Failure: test_write_invalid_db 00:14:22.703 00:14:22.703 Executing: test_invalid_db_write_overflow_sq 00:14:22.703 Waiting for AER completion... 00:14:22.703 Failure: test_invalid_db_write_overflow_sq 00:14:22.703 00:14:22.703 Executing: test_invalid_db_write_overflow_cq 00:14:22.703 Waiting for AER completion... 
00:14:22.703 Failure: test_invalid_db_write_overflow_cq 00:14:22.703 00:14:22.703 13:40:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:22.703 13:40:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:22.961 [2024-11-06 13:40:16.690872] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 Executing: test_write_invalid_db 00:14:32.933 Waiting for AER completion... 00:14:32.933 Failure: test_write_invalid_db 00:14:32.933 00:14:32.933 Executing: test_invalid_db_write_overflow_sq 00:14:32.933 Waiting for AER completion... 00:14:32.933 Failure: test_invalid_db_write_overflow_sq 00:14:32.933 00:14:32.933 Executing: test_invalid_db_write_overflow_cq 00:14:32.933 Waiting for AER completion... 00:14:32.933 Failure: test_invalid_db_write_overflow_cq 00:14:32.933 00:14:32.933 00:14:32.933 real 0m40.281s 00:14:32.933 user 0m28.415s 00:14:32.933 sys 0m11.459s 00:14:32.933 13:40:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:32.933 13:40:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:14:32.933 ************************************ 00:14:32.933 END TEST nvme_doorbell_aers 00:14:32.933 ************************************ 00:14:32.933 13:40:26 nvme -- nvme/nvme.sh@97 -- # uname 00:14:32.933 13:40:26 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:14:32.933 13:40:26 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:32.933 13:40:26 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:14:32.933 13:40:26 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:32.933 13:40:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.933 ************************************ 00:14:32.933 START TEST nvme_multi_aen 00:14:32.933 ************************************ 00:14:32.933 13:40:26 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:32.933 [2024-11-06 13:40:26.791199] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.791357] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.791392] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.794057] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.794181] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.794221] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.796574] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. 
Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.796886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.796941] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.799098] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.799176] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 [2024-11-06 13:40:26.799210] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64934) is not found. Dropping the request. 00:14:32.933 Child process pid: 65449 00:14:33.502 [Child] Asynchronous Event Request test 00:14:33.502 [Child] Attached to 0000:00:13.0 00:14:33.502 [Child] Attached to 0000:00:10.0 00:14:33.502 [Child] Attached to 0000:00:11.0 00:14:33.502 [Child] Attached to 0000:00:12.0 00:14:33.502 [Child] Registering asynchronous event callbacks... 00:14:33.502 [Child] Getting orig temperature thresholds of all controllers 00:14:33.502 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:33.503 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:33.503 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:33.503 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:33.503 [Child] Waiting for all controllers to trigger AER and reset threshold 00:14:33.503 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:33.503 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:33.503 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:33.503 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:33.503 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:33.503 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:33.503 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:33.503 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:33.503 [Child] Cleaning up... 00:14:33.503 Asynchronous Event Request test 00:14:33.503 Attached to 0000:00:13.0 00:14:33.503 Attached to 0000:00:10.0 00:14:33.503 Attached to 0000:00:11.0 00:14:33.503 Attached to 0000:00:12.0 00:14:33.503 Reset controller to setup AER completions for this process 00:14:33.503 Registering asynchronous event callbacks... 
00:14:33.503 Getting orig temperature thresholds of all controllers 00:14:33.503 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:33.503 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:33.503 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:33.503 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:33.503 Setting all controllers temperature threshold low to trigger AER 00:14:33.503 Waiting for all controllers temperature threshold to be set lower 00:14:33.503 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:33.503 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:33.503 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:33.503 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:33.503 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:33.503 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:33.503 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:33.503 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:33.503 Waiting for all controllers to trigger AER and reset threshold 00:14:33.503 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:33.503 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:33.503 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:33.503 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:33.503 Cleaning up... 00:14:33.503 00:14:33.503 real 0m0.799s 00:14:33.503 user 0m0.315s 00:14:33.503 sys 0m0.363s 00:14:33.503 13:40:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:33.503 13:40:27 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:14:33.503 ************************************ 00:14:33.503 END TEST nvme_multi_aen 00:14:33.503 ************************************ 00:14:33.503 13:40:27 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:33.503 13:40:27 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:33.503 13:40:27 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:33.503 13:40:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.503 ************************************ 00:14:33.503 START TEST nvme_startup 00:14:33.503 ************************************ 00:14:33.503 13:40:27 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:33.760 Initializing NVMe Controllers 00:14:33.760 Attached to 0000:00:13.0 00:14:33.760 Attached to 0000:00:10.0 00:14:33.760 Attached to 0000:00:11.0 00:14:33.760 Attached to 0000:00:12.0 00:14:33.760 Initialization complete. 00:14:33.760 Time used:260255.172 (us). 
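nvme_startup's single data point is the bring-up cost: "Time used:260255.172 (us)", about 0.26 s to initialize and attach all four controllers. A sketch of how such a number can be taken, assuming the same tick helpers as before; the tool's own accounting and its -t argument are not reproduced here:

    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <inttypes.h>
    #include <stdio.h>

    static bool probe_cb(void *c, const struct spdk_nvme_transport_id *t,
                         struct spdk_nvme_ctrlr_opts *o) { return true; }
    static void attach_cb(void *c, const struct spdk_nvme_transport_id *t,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *o) { }

    /* Measure controller bring-up in the unit the startup test reports. */
    static void time_startup(void)
    {
        uint64_t t0 = spdk_get_ticks();

        spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
        printf("Time used:%" PRIu64 " (us)\n",
               (spdk_get_ticks() - t0) * 1000000ULL / spdk_get_ticks_hz());
    }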
00:14:33.760 00:14:33.760 real 0m0.382s 00:14:33.760 user 0m0.140s 00:14:33.760 sys 0m0.186s 00:14:33.760 13:40:27 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:33.760 13:40:27 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 ************************************ 00:14:33.760 END TEST nvme_startup 00:14:33.760 ************************************ 00:14:33.760 13:40:27 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:14:33.760 13:40:27 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:33.760 13:40:27 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:33.760 13:40:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.760 ************************************ 00:14:33.760 START TEST nvme_multi_secondary 00:14:33.760 ************************************ 00:14:33.760 13:40:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:14:33.760 13:40:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65511 00:14:33.760 13:40:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65512 00:14:33.760 13:40:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:14:33.760 13:40:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:14:33.760 13:40:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:37.048 Initializing NVMe Controllers 00:14:37.048 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:37.048 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:37.048 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:37.048 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:37.048 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:37.048 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:37.048 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:37.048 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:37.048 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:37.048 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:37.048 Initialization complete. Launching workers. 
00:14:37.048 ======================================================== 00:14:37.048 Latency(us) 00:14:37.048 Device Information : IOPS MiB/s Average min max 00:14:37.048 PCIE (0000:00:13.0) NSID 1 from core 1: 5438.76 21.25 2941.42 1083.41 9424.60 00:14:37.048 PCIE (0000:00:10.0) NSID 1 from core 1: 5438.76 21.25 2940.29 1070.00 9656.72 00:14:37.048 PCIE (0000:00:11.0) NSID 1 from core 1: 5438.76 21.25 2941.76 1131.40 9824.68 00:14:37.048 PCIE (0000:00:12.0) NSID 1 from core 1: 5438.76 21.25 2941.87 1129.94 8244.29 00:14:37.048 PCIE (0000:00:12.0) NSID 2 from core 1: 5438.76 21.25 2942.05 1104.33 8435.35 00:14:37.048 PCIE (0000:00:12.0) NSID 3 from core 1: 5438.76 21.25 2942.51 1101.70 9032.07 00:14:37.048 ======================================================== 00:14:37.048 Total : 32632.54 127.47 2941.65 1070.00 9824.68 00:14:37.048 00:14:37.616 Initializing NVMe Controllers 00:14:37.616 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:37.616 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:37.616 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:37.616 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:37.616 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:37.616 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:37.616 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:37.616 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:37.616 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:37.616 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:37.616 Initialization complete. Launching workers. 00:14:37.616 ======================================================== 00:14:37.616 Latency(us) 00:14:37.616 Device Information : IOPS MiB/s Average min max 00:14:37.616 PCIE (0000:00:13.0) NSID 1 from core 2: 2205.50 8.62 7254.17 1941.20 19708.60 00:14:37.616 PCIE (0000:00:10.0) NSID 1 from core 2: 2205.50 8.62 7252.83 2028.16 19463.66 00:14:37.616 PCIE (0000:00:11.0) NSID 1 from core 2: 2205.50 8.62 7254.88 1834.15 23578.25 00:14:37.616 PCIE (0000:00:12.0) NSID 1 from core 2: 2205.50 8.62 7255.20 1900.48 20148.87 00:14:37.616 PCIE (0000:00:12.0) NSID 2 from core 2: 2205.50 8.62 7255.54 2060.68 19415.06 00:14:37.616 PCIE (0000:00:12.0) NSID 3 from core 2: 2205.50 8.62 7265.58 2064.87 19668.75 00:14:37.616 ======================================================== 00:14:37.616 Total : 13233.02 51.69 7256.37 1834.15 23578.25 00:14:37.616 00:14:37.616 13:40:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65511 00:14:39.521 Initializing NVMe Controllers 00:14:39.521 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:39.521 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:39.521 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:39.521 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:39.521 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:39.521 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:39.521 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:39.521 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:39.521 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:39.521 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:39.521 Initialization complete. Launching workers. 
00:14:39.521 ======================================================== 00:14:39.521 Latency(us) 00:14:39.521 Device Information : IOPS MiB/s Average min max 00:14:39.521 PCIE (0000:00:13.0) NSID 1 from core 0: 7105.46 27.76 2251.30 1016.45 9795.09 00:14:39.521 PCIE (0000:00:10.0) NSID 1 from core 0: 7105.46 27.76 2250.03 989.52 9831.54 00:14:39.521 PCIE (0000:00:11.0) NSID 1 from core 0: 7105.46 27.76 2251.18 1016.29 10106.09 00:14:39.521 PCIE (0000:00:12.0) NSID 1 from core 0: 7105.46 27.76 2251.12 1004.26 10239.38 00:14:39.521 PCIE (0000:00:12.0) NSID 2 from core 0: 7105.46 27.76 2251.07 1019.50 10593.67 00:14:39.521 PCIE (0000:00:12.0) NSID 3 from core 0: 7105.46 27.76 2251.02 994.44 10988.75 00:14:39.521 ======================================================== 00:14:39.521 Total : 42632.76 166.53 2250.95 989.52 10988.75 00:14:39.521 00:14:39.521 13:40:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65512 00:14:39.521 13:40:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:14:39.521 13:40:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65581 00:14:39.521 13:40:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65582 00:14:39.521 13:40:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:39.521 13:40:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:14:42.850 Initializing NVMe Controllers 00:14:42.850 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:42.850 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:42.850 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:42.850 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:42.850 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:42.850 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:42.850 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:42.850 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:42.850 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:42.850 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:42.850 Initialization complete. Launching workers. 
00:14:42.850 ======================================================== 00:14:42.850 Latency(us) 00:14:42.850 Device Information : IOPS MiB/s Average min max 00:14:42.850 PCIE (0000:00:13.0) NSID 1 from core 0: 4746.45 18.54 3370.47 1194.87 7422.25 00:14:42.850 PCIE (0000:00:10.0) NSID 1 from core 0: 4746.45 18.54 3369.37 1190.63 7887.98 00:14:42.850 PCIE (0000:00:11.0) NSID 1 from core 0: 4746.45 18.54 3370.97 1213.14 8436.06 00:14:42.850 PCIE (0000:00:12.0) NSID 1 from core 0: 4746.45 18.54 3370.87 1222.83 8089.40 00:14:42.850 PCIE (0000:00:12.0) NSID 2 from core 0: 4746.45 18.54 3370.83 1213.64 8461.88 00:14:42.850 PCIE (0000:00:12.0) NSID 3 from core 0: 4746.45 18.54 3370.96 1208.76 8813.05 00:14:42.850 ======================================================== 00:14:42.850 Total : 28478.73 111.25 3370.58 1190.63 8813.05 00:14:42.850 00:14:43.108 Initializing NVMe Controllers 00:14:43.108 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:43.108 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:43.108 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:43.108 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:43.108 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:43.108 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:43.108 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:43.108 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:43.108 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:43.108 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:43.108 Initialization complete. Launching workers. 00:14:43.108 ======================================================== 00:14:43.108 Latency(us) 00:14:43.108 Device Information : IOPS MiB/s Average min max 00:14:43.108 PCIE (0000:00:13.0) NSID 1 from core 1: 4824.25 18.84 3316.03 1093.86 8691.87 00:14:43.108 PCIE (0000:00:10.0) NSID 1 from core 1: 4824.25 18.84 3314.38 1041.51 9266.52 00:14:43.108 PCIE (0000:00:11.0) NSID 1 from core 1: 4824.25 18.84 3315.56 1099.97 9543.60 00:14:43.108 PCIE (0000:00:12.0) NSID 1 from core 1: 4824.25 18.84 3315.34 1103.35 9719.97 00:14:43.108 PCIE (0000:00:12.0) NSID 2 from core 1: 4824.25 18.84 3315.13 1070.32 10092.90 00:14:43.108 PCIE (0000:00:12.0) NSID 3 from core 1: 4824.25 18.84 3314.93 1053.98 8226.17 00:14:43.108 ======================================================== 00:14:43.108 Total : 28945.53 113.07 3315.23 1041.51 10092.90 00:14:43.108 00:14:45.014 Initializing NVMe Controllers 00:14:45.014 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:45.014 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:45.014 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:45.014 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:45.014 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:45.014 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:45.014 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:45.014 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:45.014 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:45.014 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:45.014 Initialization complete. Launching workers. 
00:14:45.014 ======================================================== 00:14:45.014 Latency(us) 00:14:45.014 Device Information : IOPS MiB/s Average min max 00:14:45.014 PCIE (0000:00:13.0) NSID 1 from core 2: 3118.83 12.18 5129.74 1095.49 18711.14 00:14:45.014 PCIE (0000:00:10.0) NSID 1 from core 2: 3118.83 12.18 5128.34 1071.89 18723.83 00:14:45.014 PCIE (0000:00:11.0) NSID 1 from core 2: 3118.83 12.18 5129.45 1025.68 19139.95 00:14:45.014 PCIE (0000:00:12.0) NSID 1 from core 2: 3118.83 12.18 5129.45 1086.95 19234.80 00:14:45.014 PCIE (0000:00:12.0) NSID 2 from core 2: 3122.03 12.20 5123.82 1104.24 19004.50 00:14:45.014 PCIE (0000:00:12.0) NSID 3 from core 2: 3122.03 12.20 5124.02 1057.33 18321.76 00:14:45.014 ======================================================== 00:14:45.014 Total : 18719.38 73.12 5127.47 1025.68 19234.80 00:14:45.014 00:14:45.014 ************************************ 00:14:45.014 END TEST nvme_multi_secondary 00:14:45.014 ************************************ 00:14:45.014 13:40:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65581 00:14:45.014 13:40:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65582 00:14:45.014 00:14:45.014 real 0m10.961s 00:14:45.014 user 0m18.760s 00:14:45.014 sys 0m1.120s 00:14:45.014 13:40:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:45.014 13:40:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:14:45.015 13:40:38 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:14:45.015 13:40:38 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:14:45.015 13:40:38 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/64501 ]] 00:14:45.015 13:40:38 nvme -- common/autotest_common.sh@1092 -- # kill 64501 00:14:45.015 13:40:38 nvme -- common/autotest_common.sh@1093 -- # wait 64501 00:14:45.015 [2024-11-06 13:40:38.747437] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.747582] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.747647] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.747689] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.752847] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.753295] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.753340] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.753379] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.757657] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 
00:14:45.015 [2024-11-06 13:40:38.757725] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.757749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.757776] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.761768] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.761839] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.761863] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.761890] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65448) is not found. Dropping the request. 00:14:45.015 [2024-11-06 13:40:38.923506] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:14:45.015 13:40:38 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:14:45.015 13:40:38 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:14:45.015 13:40:38 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:45.015 13:40:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:45.015 13:40:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:45.015 13:40:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.015 ************************************ 00:14:45.015 START TEST bdev_nvme_reset_stuck_adm_cmd 00:14:45.015 ************************************ 00:14:45.015 13:40:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:45.274 * Looking for test storage... 
00:14:45.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.275 --rc genhtml_branch_coverage=1 00:14:45.275 --rc genhtml_function_coverage=1 00:14:45.275 --rc genhtml_legend=1 00:14:45.275 --rc geninfo_all_blocks=1 00:14:45.275 --rc geninfo_unexecuted_blocks=1 00:14:45.275 00:14:45.275 ' 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.275 --rc genhtml_branch_coverage=1 00:14:45.275 --rc genhtml_function_coverage=1 00:14:45.275 --rc genhtml_legend=1 00:14:45.275 --rc geninfo_all_blocks=1 00:14:45.275 --rc geninfo_unexecuted_blocks=1 00:14:45.275 00:14:45.275 ' 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.275 --rc genhtml_branch_coverage=1 00:14:45.275 --rc genhtml_function_coverage=1 00:14:45.275 --rc genhtml_legend=1 00:14:45.275 --rc geninfo_all_blocks=1 00:14:45.275 --rc geninfo_unexecuted_blocks=1 00:14:45.275 00:14:45.275 ' 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.275 --rc genhtml_branch_coverage=1 00:14:45.275 --rc genhtml_function_coverage=1 00:14:45.275 --rc genhtml_legend=1 00:14:45.275 --rc geninfo_all_blocks=1 00:14:45.275 --rc geninfo_unexecuted_blocks=1 00:14:45.275 00:14:45.275 ' 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:14:45.275 
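The long scripts/common.sh trace above is just a dotted-version gate: lcov's reported version is split on '.'/'-' into component arrays and compared field by field, so that "1.15 < 2" holds and the legacy --rc lcov_* option spelling gets selected. A simplified, hedged reconstruction of that helper follows (the real cmp_versions also handles >, >=, <= and a ':' separator; use_legacy_lcov_opts is an invented name for illustration):

lt() {    # "is $1 older than $2?" -- component-wise numeric compare
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing components compare as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_lcov_opts=1
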
13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:45.275 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:14:45.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65744 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65744 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 65744 ']' 00:14:45.587 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.588 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:45.588 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:45.588 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:45.588 13:40:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:45.588 [2024-11-06 13:40:39.445137] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:14:45.588 [2024-11-06 13:40:39.445322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65744 ] 00:14:45.846 [2024-11-06 13:40:39.681630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.104 [2024-11-06 13:40:39.892329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.104 [2024-11-06 13:40:39.892610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.104 [2024-11-06 13:40:39.892827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.104 [2024-11-06 13:40:39.892899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:47.479 nvme0n1 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XPkqm.txt 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:47.479 true 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730900441 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65778 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:14:47.479 13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:47.479 
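Everything the trap line above just armed reads as one small RPC script: attach the controller, install a one-shot injection that holds the next GET FEATURES admin command (opc 0x0a = 10) for up to 15 s and completes it with sct=0/sc=1, fire that command asynchronously, then recover it with a controller reset. The trace lines that follow execute exactly these steps one at a time; condensed here into a hedged standalone sketch (paths and RPC flags are the ones from the trace, $cmd_b64 stands in for the base64 command payload shown above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &   # parks until the reset
sleep 2
$rpc bdev_nvme_reset_controller nvme0    # reset manually completes the parked command
wait $!                                  # reap send_cmd; its saved .cpl carries sct/sc
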
13:40:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:49.382 [2024-11-06 13:40:43.132105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:14:49.382 [2024-11-06 13:40:43.132560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.382 [2024-11-06 13:40:43.132611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.382 [2024-11-06 13:40:43.132630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.382 [2024-11-06 13:40:43.135052] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:14:49.382 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65778 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65778 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65778 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XPkqm.txt 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XPkqm.txt 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65744 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 65744 ']' 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 65744 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65744 00:14:49.382 killing process with pid 65744 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65744' 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 65744 00:14:49.382 13:40:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 65744 00:14:52.668 13:40:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:14:52.668 13:40:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:14:52.668 ************************************ 00:14:52.668 END TEST bdev_nvme_reset_stuck_adm_cmd 00:14:52.668 ************************************ 00:14:52.668 00:14:52.668 real 0m7.035s 
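The pair of hexdump pipelines above implement base64_decode_bits: decode the 16-byte completion that bdev_nvme_send_cmd saved into the temp file, take the status halfword from its last two bytes (0x0002 for this run), and mask out a bitfield. A hedged reconstruction, keeping the same (shift, mask) arguments the trace passes — shift 1/mask 255 extracts the Status Code, shift 9/mask 3 the low Status Code Type bits of the NVMe CQE status field:

base64_decode_bits() {    # $1 = base64-encoded cpl, $2 = right shift, $3 = mask
    local -a bin_array
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    local status=$(( bin_array[14] | bin_array[15] << 8 ))   # CQE DW3 status halfword, little-endian
    printf '0x%x\n' $(( (status >> $2) & $3 ))
}
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1  (SC matches the injected sc=1)
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0  (SCT: generic command status)

This is what the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check on the next line verifies: the completion delivered by the reset carries exactly the injected status.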
00:14:52.668 user 0m24.313s 00:14:52.668 sys 0m0.949s 00:14:52.668 13:40:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:52.668 13:40:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:52.668 13:40:46 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:14:52.668 13:40:46 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:14:52.668 13:40:46 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:52.668 13:40:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.668 13:40:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.668 ************************************ 00:14:52.668 START TEST nvme_fio 00:14:52.668 ************************************ 00:14:52.668 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:14:52.668 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:14:52.668 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:14:52.668 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:52.668 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:52.668 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:14:52.668 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:14:52.668 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:52.668 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:52.927 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:52.927 13:40:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:52.927 13:40:46 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:52.927 13:40:46 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:53.185 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:53.185 fio-3.35 00:14:53.185 Starting 1 thread 00:14:56.470 00:14:56.470 test: (groupid=0, jobs=1): err= 0: pid=65930: Wed Nov 6 13:40:50 2024 00:14:56.470 read: IOPS=19.6k, BW=76.5MiB/s (80.3MB/s)(153MiB/2001msec) 00:14:56.470 slat (nsec): min=4300, max=49352, avg=5347.27, stdev=1219.23 00:14:56.470 clat (usec): min=251, max=8738, avg=3254.50, stdev=415.36 00:14:56.470 lat (usec): min=256, max=8788, avg=3259.85, stdev=415.69 00:14:56.470 clat percentiles (usec): 00:14:56.470 | 1.00th=[ 1811], 5.00th=[ 2573], 10.00th=[ 3032], 20.00th=[ 3130], 00:14:56.470 | 30.00th=[ 3195], 40.00th=[ 3228], 50.00th=[ 3261], 60.00th=[ 3294], 00:14:56.470 | 70.00th=[ 3326], 80.00th=[ 3359], 90.00th=[ 3490], 95.00th=[ 4113], 00:14:56.470 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 5407], 99.95th=[ 6915], 00:14:56.470 | 99.99th=[ 8586] 00:14:56.470 bw ( KiB/s): min=73704, max=80720, per=99.49%, avg=77981.33, stdev=3752.56, samples=3 00:14:56.470 iops : min=18426, max=20180, avg=19495.33, stdev=938.14, samples=3 00:14:56.470 write: IOPS=19.6k, BW=76.4MiB/s (80.1MB/s)(153MiB/2001msec); 0 zone resets 00:14:56.470 slat (nsec): min=4378, max=64176, avg=5524.80, stdev=1336.30 00:14:56.470 clat (usec): min=213, max=8619, avg=3256.58, stdev=422.86 00:14:56.470 lat (usec): min=219, max=8638, avg=3262.10, stdev=423.20 00:14:56.470 clat percentiles (usec): 00:14:56.470 | 1.00th=[ 1778], 5.00th=[ 2540], 10.00th=[ 3032], 20.00th=[ 3130], 00:14:56.470 | 30.00th=[ 3195], 40.00th=[ 3228], 50.00th=[ 3261], 60.00th=[ 3294], 00:14:56.470 | 70.00th=[ 3326], 80.00th=[ 3359], 90.00th=[ 3523], 95.00th=[ 4113], 00:14:56.470 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 5800], 99.95th=[ 7177], 00:14:56.470 | 99.99th=[ 8455] 00:14:56.470 bw ( KiB/s): min=73696, max=81016, per=99.88%, avg=78141.33, stdev=3904.59, samples=3 00:14:56.470 iops : min=18424, max=20254, avg=19535.33, stdev=976.15, samples=3 00:14:56.470 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.04% 00:14:56.470 lat (msec) : 2=1.58%, 4=92.25%, 10=6.10% 00:14:56.470 cpu : usr=99.15%, sys=0.20%, ctx=4, majf=0, 
minf=607 00:14:56.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:56.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.470 issued rwts: total=39208,39136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.470 00:14:56.470 Run status group 0 (all jobs): 00:14:56.470 READ: bw=76.5MiB/s (80.3MB/s), 76.5MiB/s-76.5MiB/s (80.3MB/s-80.3MB/s), io=153MiB (161MB), run=2001-2001msec 00:14:56.470 WRITE: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=153MiB (160MB), run=2001-2001msec 00:14:56.729 ----------------------------------------------------- 00:14:56.729 Suppressions used: 00:14:56.729 count bytes template 00:14:56.729 1 32 /usr/src/fio/parse.c 00:14:56.729 1 8 libtcmalloc_minimal.so 00:14:56.729 ----------------------------------------------------- 00:14:56.729 00:14:56.729 13:40:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:56.729 13:40:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:56.729 13:40:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:56.729 13:40:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:57.036 13:40:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:57.036 13:40:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:57.324 13:40:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:57.324 13:40:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:57.324 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:14:57.324 13:40:51 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:57.325 13:40:51 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:57.583 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:57.583 fio-3.35 00:14:57.583 Starting 1 thread 00:15:00.867 00:15:00.867 test: (groupid=0, jobs=1): err= 0: pid=66002: Wed Nov 6 13:40:54 2024 00:15:00.867 read: IOPS=18.1k, BW=70.7MiB/s (74.2MB/s)(142MiB/2001msec) 00:15:00.867 slat (nsec): min=4409, max=68719, avg=5704.36, stdev=1684.66 00:15:00.867 clat (usec): min=213, max=8674, avg=3517.58, stdev=595.24 00:15:00.867 lat (usec): min=218, max=8679, avg=3523.29, stdev=596.08 00:15:00.867 clat percentiles (usec): 00:15:00.867 | 1.00th=[ 2966], 5.00th=[ 3032], 10.00th=[ 3097], 20.00th=[ 3130], 00:15:00.867 | 30.00th=[ 3163], 40.00th=[ 3195], 50.00th=[ 3261], 60.00th=[ 3326], 00:15:00.867 | 70.00th=[ 3851], 80.00th=[ 4047], 90.00th=[ 4178], 95.00th=[ 4293], 00:15:00.867 | 99.00th=[ 5735], 99.50th=[ 7242], 99.90th=[ 8160], 99.95th=[ 8356], 00:15:00.867 | 99.99th=[ 8455] 00:15:00.867 bw ( KiB/s): min=63608, max=78912, per=96.74%, avg=70066.67, stdev=7926.24, samples=3 00:15:00.867 iops : min=15902, max=19728, avg=17516.67, stdev=1981.56, samples=3 00:15:00.867 write: IOPS=18.1k, BW=70.8MiB/s (74.3MB/s)(142MiB/2001msec); 0 zone resets 00:15:00.867 slat (nsec): min=4549, max=66215, avg=5861.01, stdev=1627.56 00:15:00.867 clat (usec): min=239, max=8657, avg=3523.76, stdev=592.66 00:15:00.867 lat (usec): min=244, max=8664, avg=3529.62, stdev=593.47 00:15:00.867 clat percentiles (usec): 00:15:00.867 | 1.00th=[ 2966], 5.00th=[ 3064], 10.00th=[ 3097], 20.00th=[ 3130], 00:15:00.867 | 30.00th=[ 3163], 40.00th=[ 3228], 50.00th=[ 3261], 60.00th=[ 3326], 00:15:00.867 | 70.00th=[ 3851], 80.00th=[ 4047], 90.00th=[ 4178], 95.00th=[ 4293], 00:15:00.867 | 99.00th=[ 5800], 99.50th=[ 7046], 99.90th=[ 8160], 99.95th=[ 8356], 00:15:00.867 | 99.99th=[ 8586] 00:15:00.867 bw ( KiB/s): min=63504, max=78880, per=96.61%, avg=70074.67, stdev=7927.84, samples=3 00:15:00.867 iops : min=15876, max=19720, avg=17518.67, stdev=1981.96, samples=3 00:15:00.867 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:15:00.867 lat (msec) : 2=0.06%, 4=77.39%, 10=22.51% 00:15:00.867 cpu : usr=99.10%, sys=0.15%, ctx=5, majf=0, minf=607 00:15:00.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:00.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:00.867 issued rwts: total=36233,36284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:00.867 00:15:00.867 Run status group 0 (all jobs): 00:15:00.867 READ: bw=70.7MiB/s (74.2MB/s), 70.7MiB/s-70.7MiB/s (74.2MB/s-74.2MB/s), io=142MiB (148MB), run=2001-2001msec 00:15:00.867 WRITE: bw=70.8MiB/s (74.3MB/s), 70.8MiB/s-70.8MiB/s (74.3MB/s-74.3MB/s), io=142MiB (149MB), run=2001-2001msec 00:15:01.435 ----------------------------------------------------- 00:15:01.435 Suppressions used: 00:15:01.435 count bytes template 00:15:01.435 1 32 /usr/src/fio/parse.c 00:15:01.435 1 8 libtcmalloc_minimal.so 00:15:01.435 ----------------------------------------------------- 00:15:01.435 00:15:01.435 
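Each per-controller fio pass above expands to the same invocation shape, worth isolating: the SPDK external ioengine is LD_PRELOADed after libasan (on an ASAN-instrumented build the sanitizer runtime has to come first in the preload list), and the target PCI address is spelled with dots because fio reserves ':' as a separator inside --filename. A hedged sketch of the next pass, which targets 0000:00:12.0 (paths are the ones shown in the trace):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio "$config" \
    '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
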
13:40:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:01.435 13:40:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:01.435 13:40:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:01.435 13:40:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:01.694 13:40:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:01.694 13:40:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:01.952 13:40:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:01.952 13:40:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:01.952 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:01.952 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:01.952 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.952 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:01.952 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.952 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:01.953 13:40:55 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:02.211 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:02.211 fio-3.35 00:15:02.211 Starting 1 thread 00:15:05.492 00:15:05.492 test: (groupid=0, jobs=1): err= 0: pid=66069: Wed Nov 6 13:40:59 2024 00:15:05.492 read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec) 00:15:05.492 slat (usec): min=4, max=317, avg= 5.75, stdev= 3.08 00:15:05.492 clat (usec): min=291, max=9301, avg=3563.42, stdev=667.22 00:15:05.492 lat (usec): min=297, max=9306, avg=3569.16, stdev=668.03 00:15:05.492 clat percentiles (usec): 00:15:05.492 | 1.00th=[ 2638], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3130], 00:15:05.492 | 30.00th=[ 3195], 40.00th=[ 
3228], 50.00th=[ 3294], 60.00th=[ 3425], 00:15:05.492 | 70.00th=[ 3818], 80.00th=[ 4047], 90.00th=[ 4293], 95.00th=[ 4555], 00:15:05.492 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 8848], 99.95th=[ 8979], 00:15:05.492 | 99.99th=[ 9241] 00:15:05.492 bw ( KiB/s): min=68720, max=79064, per=100.00%, avg=72469.67, stdev=5728.76, samples=3 00:15:05.492 iops : min=17180, max=19766, avg=18117.33, stdev=1432.25, samples=3 00:15:05.492 write: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec); 0 zone resets 00:15:05.492 slat (usec): min=4, max=451, avg= 5.95, stdev= 3.88 00:15:05.492 clat (usec): min=328, max=9262, avg=3563.87, stdev=665.95 00:15:05.492 lat (usec): min=335, max=9273, avg=3569.82, stdev=666.74 00:15:05.492 clat percentiles (usec): 00:15:05.492 | 1.00th=[ 2671], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3130], 00:15:05.492 | 30.00th=[ 3195], 40.00th=[ 3228], 50.00th=[ 3294], 60.00th=[ 3425], 00:15:05.492 | 70.00th=[ 3818], 80.00th=[ 4047], 90.00th=[ 4293], 95.00th=[ 4621], 00:15:05.492 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 8848], 99.95th=[ 8979], 00:15:05.492 | 99.99th=[ 9241] 00:15:05.492 bw ( KiB/s): min=68440, max=79160, per=100.00%, avg=72397.67, stdev=5884.66, samples=3 00:15:05.492 iops : min=17110, max=19790, avg=18099.33, stdev=1471.22, samples=3 00:15:05.492 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:05.492 lat (msec) : 2=0.15%, 4=76.30%, 10=23.52% 00:15:05.492 cpu : usr=98.35%, sys=0.50%, ctx=19, majf=0, minf=608 00:15:05.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:05.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.493 issued rwts: total=35810,35811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.493 00:15:05.493 Run status group 0 (all jobs): 00:15:05.493 READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:15:05.493 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:15:06.061 ----------------------------------------------------- 00:15:06.061 Suppressions used: 00:15:06.061 count bytes template 00:15:06.061 1 32 /usr/src/fio/parse.c 00:15:06.061 1 8 libtcmalloc_minimal.so 00:15:06.061 ----------------------------------------------------- 00:15:06.061 00:15:06.061 13:40:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:06.061 13:40:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:06.061 13:40:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:06.061 13:40:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:06.319 13:41:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:06.319 13:41:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:06.578 13:41:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:06.578 13:41:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:06.578 13:41:00 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:06.837 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:06.837 fio-3.35 00:15:06.837 Starting 1 thread 00:15:12.145 00:15:12.145 test: (groupid=0, jobs=1): err= 0: pid=66135: Wed Nov 6 13:41:05 2024 00:15:12.145 read: IOPS=17.6k, BW=68.7MiB/s (72.1MB/s)(138MiB/2001msec) 00:15:12.145 slat (nsec): min=4482, max=55580, avg=5857.31, stdev=1705.82 00:15:12.145 clat (usec): min=367, max=11311, avg=3616.26, stdev=667.53 00:15:12.145 lat (usec): min=372, max=11367, avg=3622.12, stdev=668.38 00:15:12.145 clat percentiles (usec): 00:15:12.145 | 1.00th=[ 2671], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3163], 00:15:12.145 | 30.00th=[ 3195], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3392], 00:15:12.145 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4424], 00:15:12.145 | 99.00th=[ 5800], 99.50th=[ 6915], 99.90th=[ 8356], 99.95th=[ 9503], 00:15:12.145 | 99.99th=[11076] 00:15:12.145 bw ( KiB/s): min=60864, max=73096, per=96.50%, avg=67917.33, stdev=6327.81, samples=3 00:15:12.145 iops : min=15216, max=18274, avg=16979.33, stdev=1581.95, samples=3 00:15:12.145 write: IOPS=17.6k, BW=68.8MiB/s (72.1MB/s)(138MiB/2001msec); 0 zone resets 00:15:12.145 slat (usec): min=4, max=103, avg= 6.06, stdev= 1.82 00:15:12.145 clat (usec): min=222, max=11171, avg=3622.17, stdev=678.51 00:15:12.145 lat (usec): min=227, max=11192, avg=3628.23, stdev=679.39 00:15:12.145 clat percentiles (usec): 00:15:12.145 | 1.00th=[ 2606], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3163], 00:15:12.145 | 30.00th=[ 3195], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3392], 00:15:12.145 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4424], 00:15:12.145 | 
99.00th=[ 5866], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 9765], 00:15:12.145 | 99.99th=[10945] 00:15:12.145 bw ( KiB/s): min=60896, max=72640, per=96.20%, avg=67765.33, stdev=6120.82, samples=3 00:15:12.145 iops : min=15224, max=18160, avg=16941.33, stdev=1530.20, samples=3 00:15:12.145 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:15:12.145 lat (msec) : 2=0.39%, 4=65.88%, 10=33.65%, 20=0.04% 00:15:12.145 cpu : usr=99.05%, sys=0.15%, ctx=4, majf=0, minf=605 00:15:12.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:12.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.146 issued rwts: total=35206,35239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.146 00:15:12.146 Run status group 0 (all jobs): 00:15:12.146 READ: bw=68.7MiB/s (72.1MB/s), 68.7MiB/s-68.7MiB/s (72.1MB/s-72.1MB/s), io=138MiB (144MB), run=2001-2001msec 00:15:12.146 WRITE: bw=68.8MiB/s (72.1MB/s), 68.8MiB/s-68.8MiB/s (72.1MB/s-72.1MB/s), io=138MiB (144MB), run=2001-2001msec 00:15:12.146 ----------------------------------------------------- 00:15:12.146 Suppressions used: 00:15:12.146 count bytes template 00:15:12.146 1 32 /usr/src/fio/parse.c 00:15:12.146 1 8 libtcmalloc_minimal.so 00:15:12.146 ----------------------------------------------------- 00:15:12.146 00:15:12.146 ************************************ 00:15:12.146 END TEST nvme_fio 00:15:12.146 ************************************ 00:15:12.146 13:41:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:12.146 13:41:05 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:12.146 00:15:12.146 real 0m19.468s 00:15:12.146 user 0m14.824s 00:15:12.146 sys 0m4.597s 00:15:12.146 13:41:05 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:12.146 13:41:05 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:12.146 ************************************ 00:15:12.146 END TEST nvme 00:15:12.146 ************************************ 00:15:12.146 00:15:12.146 real 1m36.308s 00:15:12.146 user 3m48.347s 00:15:12.146 sys 0m24.396s 00:15:12.146 13:41:05 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:12.146 13:41:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.146 13:41:05 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:15:12.146 13:41:05 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:12.146 13:41:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:12.146 13:41:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:12.146 13:41:05 -- common/autotest_common.sh@10 -- # set +x 00:15:12.146 ************************************ 00:15:12.146 START TEST nvme_scc 00:15:12.146 ************************************ 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:12.146 * Looking for test storage... 
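The two fio_plugin invocations traced above follow the same pattern each time: ldd the SPDK fio plugin, grep for a sanitizer runtime (libasan), take the resolved path from awk '{print $3}', and prepend it to LD_PRELOAD so the uninstrumented /usr/src/fio/fio binary can load the ASan-linked plugin. A minimal sketch of that pattern, assuming a hypothetical ASan-instrumented shared object at ./plugin.so:

#!/usr/bin/env bash
# Sketch of the LD_PRELOAD trick traced in fio_plugin(): find the sanitizer
# runtime a plugin links against and preload it ahead of the plugin itself.
plugin=./plugin.so                       # hypothetical plugin path
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
    # field 3 is the resolved library path
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# Preload the sanitizer runtime first, then the plugin, then run fio.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --version

In the trace this resolved asan_lib=/usr/lib64/libasan.so.8, which is exactly what ends up in LD_PRELOAD before each fio run.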
00:15:12.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@345 -- # : 1 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@368 -- # return 0 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:12.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.146 --rc genhtml_branch_coverage=1 00:15:12.146 --rc genhtml_function_coverage=1 00:15:12.146 --rc genhtml_legend=1 00:15:12.146 --rc geninfo_all_blocks=1 00:15:12.146 --rc geninfo_unexecuted_blocks=1 00:15:12.146 00:15:12.146 ' 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:12.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.146 --rc genhtml_branch_coverage=1 00:15:12.146 --rc genhtml_function_coverage=1 00:15:12.146 --rc genhtml_legend=1 00:15:12.146 --rc geninfo_all_blocks=1 00:15:12.146 --rc geninfo_unexecuted_blocks=1 00:15:12.146 00:15:12.146 ' 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:15:12.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.146 --rc genhtml_branch_coverage=1 00:15:12.146 --rc genhtml_function_coverage=1 00:15:12.146 --rc genhtml_legend=1 00:15:12.146 --rc geninfo_all_blocks=1 00:15:12.146 --rc geninfo_unexecuted_blocks=1 00:15:12.146 00:15:12.146 ' 00:15:12.146 13:41:05 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:12.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.146 --rc genhtml_branch_coverage=1 00:15:12.146 --rc genhtml_function_coverage=1 00:15:12.146 --rc genhtml_legend=1 00:15:12.146 --rc geninfo_all_blocks=1 00:15:12.146 --rc geninfo_unexecuted_blocks=1 00:15:12.146 00:15:12.146 ' 00:15:12.146 13:41:05 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.146 13:41:05 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.146 13:41:05 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.146 13:41:05 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.146 13:41:05 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.146 13:41:05 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:12.146 13:41:05 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
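The lcov check traced just above runs through scripts/common.sh's lt/cmp_versions: each version string is split on the IFS set '.-:' into an array, and the components are compared numerically left to right, with the shorter array padded with zeros. A hedged sketch of the same field-by-field comparison, implementing only the '<' case seen here:

#!/usr/bin/env bash
# Sketch of the version compare traced from scripts/common.sh (cmp_versions):
# split on . - :, pad the shorter version with zeros, compare numerically.
lt() {   # returns 0 if $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}
lt 1.15 2 && echo "1.15 < 2"   # matches the lcov 1.15 vs 2 check above

The real cmp_versions takes the operator as a parameter; this sketch hard-codes '<' to mirror the single comparison in the trace.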
00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:12.146 13:41:05 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:12.146 13:41:05 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.146 13:41:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:12.146 13:41:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:12.146 13:41:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:12.146 13:41:05 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:12.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:12.663 Waiting for block devices as requested 00:15:12.663 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:12.920 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:12.920 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:12.920 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:18.188 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:18.188 13:41:11 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:18.188 13:41:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:18.188 13:41:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:18.188 13:41:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:18.188 13:41:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
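The long nvme_get trace that begins here is doing one simple thing at machine pace: it reads the output of nvme id-ctrl line by line with IFS=: so each line splits into a register name and a value, then evals the pair into a global associative array (nvme0[vid]=0x1b36, nvme0[ssvid]=0x1af4, and so on). A simplified sketch of the same parsing pattern, assuming nvme-cli is installed and /dev/nvme0 exists; the real function additionally uses eval with a name reference so the target array name is dynamic, and preserves trailing padding in string fields such as sn and mn:

#!/usr/bin/env bash
# Sketch of nvme_get as traced above: turn "reg : value" lines from
# nvme id-ctrl into entries of a bash associative array.
declare -A ctrl
while IFS=: read -r reg val; do
    # register names carry no internal spaces, so strip all whitespace;
    # values keep everything after the first colon, minus leading spaces
    reg=${reg//[[:space:]]/}
    val="${val#"${val%%[![:space:]]*}"}"
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

Because read assigns the remainder of the line to its last variable, multi-colon values such as the power-state line (mp:25.00W operational ...) land intact in val, which is why the trace can capture them as single entries.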
00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.188 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
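Several of the fields captured above are bit masks from the Identify Controller structure rather than plain counts: oacs=0x12a, frmw=0x3, lpa=0x7. Later tests key off individual capability bits in these. A small sketch of decoding one of them, using bit positions from the NVMe base specification for OACS (bit 1 = Format NVM, bit 3 = Namespace Management, bit 8 = Doorbell Buffer Config):

#!/usr/bin/env bash
# Sketch: decoding OACS (Optional Admin Command Support) bits from the
# value captured above; bash arithmetic accepts the 0x prefix directly.
oacs=0x12a
(( oacs & (1 << 1) )) && echo "Format NVM supported"
(( oacs & (1 << 3) )) && echo "Namespace Management supported"
(( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"

All three lines print for 0x12a (binary 1'0010'1010), consistent with what the QEMU NVMe controller advertises here.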
00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:18.189 13:41:12 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.189 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:18.190 13:41:12 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.190 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:18.191 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
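[Editor's note] Some captured fields are only meaningful together. For nvme0n1, flbas=0x4 (recorded earlier) selects the active LBA format via its low nibble, and the matching lbaf4 string, captured a few entries below as 'ms:0 lbads:12 rp:0 (in use)', carries lbads, the log2 of the data block size. A hedged sketch of that decode, assuming the array populated by this trace:

    flbas=${nvme0n1[flbas]}           # 0x4 in this run
    fmt=$(( flbas & 0xf ))            # low nibble -> LBA format index 4
    lbaf=${nvme0n1[lbaf$fmt]}         # 'ms:0 lbads:12 rp:0 (in use)'
    if [[ $lbaf =~ lbads:([0-9]+) ]]; then
      echo "block size: $(( 1 << BASH_REMATCH[1] )) bytes"   # -> 4096
    fi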
00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:18.192 13:41:12 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:18.192 13:41:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:18.192 13:41:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:18.192 13:41:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:18.192 13:41:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:18.192 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:18.193 13:41:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:18.193 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:18.457 
13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
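[Editor's note] The oacs=0x12a just captured for nvme1 is a bitmask of Optional Admin Command Support. Decoding it needs only shell arithmetic; the bit positions below come from the NVMe base specification (Identify Controller, OACS), not from this script:

    oacs=${nvme1[oacs]}               # 0x12a in this run
    (( oacs & 0x002 )) && echo "Format NVM supported"
    (( oacs & 0x008 )) && echo "Namespace Management supported"
    (( oacs & 0x100 )) && echo "Doorbell Buffer Config supported"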
00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:18.457 13:41:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.457 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:18.458 13:41:12 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.458 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:18.459 13:41:12 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
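[Editor's note] The field an nvme_scc run ultimately cares about was captured a little earlier: oncs=0x15d. Bit 8 of ONCS advertises the Copy command (Simple Copy). A sketch of the gate a test can apply once the array is filled, with the bit position again taken from the NVMe spec rather than quoted from functions.sh:

    oncs=${nvme1[oncs]}               # 0x15d in this run
    if (( oncs & (1 << 8) )); then    # ONCS bit 8: Copy command supported
      echo "nvme1 supports Simple Copy"
    fi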
00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:18.459 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- 
00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:15:18.460 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 ' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:15:18.461 13:41:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:15:18.461 13:41:12 nvme_scc -- scripts/common.sh@27 -- # return 0
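The entries above and below are the harness's nvme_get helper at work: it shells out to a pinned nvme-cli binary, splits each "key : value" line on the first colon with IFS=: and read -r, and evals the pair into a global associative array (nvme1n1[...], nvme2[...], ...). A minimal sketch of that loop, reconstructed from the functions.sh@16-23 trace lines; the real script's whitespace trimming and edge cases differ:

    # Sketch only: mirrors the trace above, not the verbatim upstream functions.sh.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme2=(), as at @20
        while IFS=: read -r reg val; do      # split "vid : 0x1b36" on the first ':'
            reg=${reg//[[:space:]]/}         # "lbaf  0" -> "lbaf0", as in the trace
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val# }\""   # nvme2[vid]="0x1b36"
        done < <(nvme "$@")                  # the harness pins /usr/local/src/nvme-cli/nvme
    }
    # Hypothetical usage: nvme_get nvme9 id-ctrl /dev/nvme9; echo "${nvme9[mdts]}"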
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:15:18.461 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:15:18.462 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:15:18.462 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
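The id-ctrl words just captured are stored raw; decoding is left to the tests. A hypothetical decode of a few of the values above (bit positions per the NVMe base spec, not something functions.sh does itself):

    oacs=0x12a frmw=0x3 wctemp=343 cctemp=373
    (( oacs & 1 << 1 )) && echo 'Format NVM supported'            # OACS bit 1
    (( oacs & 1 << 3 )) && echo 'Namespace Management supported'  # OACS bit 3
    echo "firmware slots: $(( frmw >> 1 & 0x7 ))"                 # FRMW bits 3:1 -> 1 slot
    echo "warn/crit temp: $(( wctemp - 273 ))C / $(( cctemp - 273 ))C"   # 70C / 100C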
00:15:18.462 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0
00:15:18.463 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2: anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:15:18.463 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2: oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2: subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
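That completes id-ctrl for nvme2. Of the fields above, sqes/cqes pack the minimum and maximum queue entry sizes as powers of two in the low and high nibbles, and ONCS bit 8 advertises the Copy command, which is presumably what an nvme_scc (simple copy) run is probing for. A hedged decode of the captured values:

    sqes=0x66 cqes=0x44 oncs=0x15d
    echo "SQE: $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) bytes"  # 64..64
    echo "CQE: $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) bytes"  # 16..16
    (( oncs & 1 << 8 )) && echo 'Copy (simple copy) supported'           # ONCS bit 8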
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:15:18.464 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:15:18.465 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:15:18.465 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
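nvme2n1 is now registered (note the @53 nameref: writes to _ctrl_ns land in the per-controller nvme2_ns array). It reports nsze=0x100000 blocks with flbas=0x4 selecting lbaf4, whose lbads:12 means 2^12-byte blocks, so the namespace is 4 GiB. A worked example against the array just built; this is a hypothetical consumer, not harness code:

    declare -A nvme2n1=( [nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
    fmt=$(( nvme2n1[flbas] & 0xf ))            # low nibble picks the LBA format -> 4
    [[ ${nvme2n1[lbaf$fmt]} =~ lbads:([0-9]+) ]]
    bs=$(( 1 << BASH_REMATCH[1] ))             # 2^12 = 4096-byte blocks
    echo "$(( nvme2n1[nsze] * bs )) bytes"     # 1048576 * 4096 = 4294967296 (4 GiB)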
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0
00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[
-n 0 ]] 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:18.466 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:18.467 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.729 
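The trace above is the expansion of nvme_get's parse loop: "nvme id-ns" prints one "reg : val" pair per line, and the loop splits each line on the first colon, trims the key, and stores the value in a global associative array named after the namespace (here nvme2n2). A minimal stand-alone sketch of that pattern, assuming nvme-cli on PATH and bash >= 4.3; the nameref and the trimming details are illustrative, not the exact functions.sh code:

    # Populate a global associative array named $1 from `nvme id-ns $2` output.
    nvme_get_sketch() {
        local dev=$2 reg val
        declare -gA "$1=()"            # e.g. creates global nvme2n2=()
        local -n arr=$1                # nameref in place of the eval seen in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # "lbaf  0 " -> "lbaf0"
            [[ -n $reg ]] && arr[$reg]=${val# }   # e.g. nvme2n2[nsze]=0x100000
        done < <(nvme id-ns "$dev")
    }

Called as "nvme_get_sketch nvme2n2 /dev/nvme2n2", this would leave ${nvme2n2[nsze]} holding 0x100000 and ${nvme2n2[lbaf4]} holding the in-use LBA format, matching the values recorded above.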
00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:15:18.729 13:41:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:15:18.729 13:41:12 nvme_scc -- nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:15:18.729 13:41:12 nvme_scc -- nvme2n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:15:18.730 13:41:12 nvme_scc -- nvme2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:18.730 13:41:12 nvme_scc -- nvme2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:15:18.730 13:41:12 nvme_scc -- nvme2n3: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:15:18.731 13:41:12 nvme_scc -- scripts/common.sh@18 -- # local i
00:15:18.731 13:41:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:15:18.731 13:41:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:15:18.731 13:41:12 nvme_scc -- scripts/common.sh@27 -- # return 0
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
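Once a controller's namespaces are all read, the walk records the controller in four global maps: ctrls (its name), nvmes (the name of its per-namespace array), bdfs (its PCI address), and ordered_ctrls (indexed by controller number), then moves to the next /sys/class/nvme entry, here nvme3 at 0000:00:13.0, which passes the pci_can_use filter and has its identify-controller data parsed the same way. A rough self-contained sketch of that enumeration; pci_can_use below is a hypothetical stand-in for the allow/deny check in scripts/common.sh, and the device-to-BDF lookup is an assumption about the sysfs layout:

    declare -A ctrls bdfs _ctrl_ns
    declare -a ordered_ctrls

    pci_can_use() {   # stand-in: reject only addresses listed in PCI_BLOCKED
        [[ " ${PCI_BLOCKED-} " != *" $1 "* ]]
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF, e.g. 0000:00:13.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme3
        _ctrl_ns=()
        for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme3n1, nvme3n2, ...
            [[ -e $ns ]] && _ctrl_ns[${ns##*n}]=${ns##*/} # index by namespace number
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # keeps numeric controller order
    done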
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.731 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 
13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:18.732 13:41:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@21-23 -- # nvme3 id-ctrl: megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@21-23 -- # nvme3 id-ctrl: ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
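The fields above are collected by nvme_get, which pipes nvme-cli output through a colon-delimited read loop into a bash associative array. A minimal standalone sketch of that pattern, simplified relative to the real functions.sh (which evals each value into a named reference):

  #!/usr/bin/env bash
  # Sketch of the nvme_get loop: `nvme id-ctrl` prints one "field : value"
  # pair per line; split on the colon and store each pair in an array.
  declare -A nvme3
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue          # skip banner and blank lines
    reg=${reg//[[:space:]]/}           # field names are padded with spaces
    nvme3[$reg]=${val# }               # e.g. nvme3[oncs]=0x15d
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
  echo "oncs=${nvme3[oncs]} nn=${nvme3[nn]} subnqn=${nvme3[subnqn]}"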
00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:15:18.733 13:41:12 nvme_scc -- nvme/functions.sh@60-63 -- # ctrls[nvme3]=nvme3 nvmes[nvme3]=nvme3_ns bdfs[nvme3]=0000:00:13.0 ordered_ctrls[3]=nvme3
00:15:18.734 13:41:12 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:15:18.734 13:41:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:15:18.734 13:41:12 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature scc))
00:15:18.734 13:41:12 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc nvme1: oncs=0x15d, (( oncs & 1 << 8 )) -> supported; echo nvme1
00:15:18.734 13:41:12 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc nvme0: oncs=0x15d, (( oncs & 1 << 8 )) -> supported; echo nvme0
00:15:18.734 13:41:12 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc nvme3: oncs=0x15d, (( oncs & 1 << 8 )) -> supported; echo nvme3
00:15:18.734 13:41:12 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc nvme2: oncs=0x15d, (( oncs & 1 << 8 )) -> supported; echo nvme2
00:15:18.734 13:41:12 nvme_scc -- nvme/functions.sh@207-209 -- # (( 4 > 0 )); echo nvme1; return 0
00:15:18.734 13:41:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:15:18.734 13:41:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:15:18.734 13:41:12 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:19.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:20.265 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:15:20.265 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:15:20.265 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:15:20.265 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
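The controller selection above hinges on a single bit: ONCS (Optional NVM Command Support) bit 8 advertises the Simple Copy command. A worked version of the `(( oncs & 1 << 8 ))` gate, using the value every controller reported:

  #!/usr/bin/env bash
  # ONCS is a bitmask; 0x15d = 0b1_0101_1101, so bit 8 (0x100) is set.
  oncs=0x15d
  if (( oncs & 1 << 8 )); then
    echo "Simple Copy supported"       # all four QEMU controllers pass
  else
    echo "Simple Copy not supported"
  fi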
00:15:20.265 13:41:14 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:15:20.265 13:41:14 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:15:20.265 13:41:14 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:20.265 13:41:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:15:20.265 ************************************
00:15:20.265 START TEST nvme_simple_copy
00:15:20.265 ************************************
00:15:20.265 13:41:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:15:20.524 Initializing NVMe Controllers
00:15:20.524 Attaching to 0000:00:10.0
00:15:20.524 Controller supports SCC. Attached to 0000:00:10.0
00:15:20.524 Namespace ID: 1 size: 6GB
00:15:20.524 Initialization complete.
00:15:20.524
00:15:20.524 Controller QEMU NVMe Ctrl (12340 )
00:15:20.524 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:15:20.524 Namespace Block Size:4096
00:15:20.524 Writing LBAs 0 to 63 with Random Data
00:15:20.524 Copied LBAs from 0 - 63 to the Destination LBA 256
00:15:20.524 LBAs matching Written Data: 64
00:15:20.524
00:15:20.524 real 0m0.340s
00:15:20.524 user 0m0.140s
00:15:20.524 sys 0m0.099s
00:15:20.524 ************************************
00:15:20.524 END TEST nvme_simple_copy
00:15:20.524 ************************************
00:15:20.524 13:41:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:20.524 13:41:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:15:20.784 ************************************
00:15:20.784 END TEST nvme_scc
00:15:20.784 ************************************
00:15:20.784
00:15:20.784 real 0m8.898s
00:15:20.784 user 0m1.562s
00:15:20.784 sys 0m2.258s
00:15:20.784 13:41:14 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:20.784 13:41:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x
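The simple_copy binary drives this through SPDK's userspace driver, but the same write/copy/verify round trip can be approximated against the kernel driver with nvme-cli and dd. Treat the `nvme copy` flags as assumptions since their spellings vary across nvme-cli releases; the block size (4096) and the LBA ranges come straight from the output above:

  #!/usr/bin/env bash
  # Hypothetical shell re-run of the check: write LBAs 0-63, have the
  # controller copy them to LBA 256, then compare both ranges host-side.
  dev=/dev/nvme0n1 bs=4096
  dd if=/dev/urandom of="$dev" bs="$bs" count=64 oflag=direct
  nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63   # 0-based block count;
                                                      # verify with `nvme copy --help`
  dd if="$dev" of=/tmp/src.bin bs="$bs" skip=0   count=64 iflag=direct
  dd if="$dev" of=/tmp/dst.bin bs="$bs" skip=256 count=64 iflag=direct
  cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"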
00:15:20.784 13:41:14 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:15:20.784 13:41:14 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:15:20.784 13:41:14 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:15:20.784 13:41:14 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:15:20.784 13:41:14 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:15:20.784 13:41:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:15:20.784 13:41:14 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:20.784 13:41:14 -- common/autotest_common.sh@10 -- # set +x
00:15:20.784 ************************************
00:15:20.784 START TEST nvme_fdp
00:15:20.784 ************************************
00:15:20.784 13:41:14 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh
00:15:20.784 * Looking for test storage...
00:15:20.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:15:20.784 13:41:14 nvme_fdp -- common/autotest_common.sh@1690-1691 -- # [[ y == y ]]; lcov --version | awk '{print $NF}' -> 1.15
00:15:21.044 13:41:14 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2: components compared numerically, 1 < 2, so `lt 1.15 2` returns 0
00:15:21.044 13:41:14 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:21.044 13:41:14 nvme_fdp -- common/autotest_common.sh@1704 -- # export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:15:21.044 13:41:14 nvme_fdp -- common/autotest_common.sh@1705 -- # export LCOV="lcov $LCOV_OPTS"
00:15:21.044 13:41:14 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:15:21.044 13:41:14 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:15:21.044 13:41:14 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:21.044 13:41:14 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:15:21.044 13:41:14 nvme_fdp -- scripts/common.sh@544-553 -- # no /bin/wpdk_common.sh; source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:21.045 13:41:14 nvme_fdp -- paths/export.sh@2-6 -- # export PATH, prepending the golangci/protoc/go toolchain directories (entries repeated by successive sourcing); final PATH: /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
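The `lt 1.15 2` call above is scripts/common.sh comparing dotted version strings field by field rather than as plain text, so 1.15 sorts below 2 and 1.9 would sort below 1.15. A simplified standalone sketch of the same idea (the real helper also splits on `-` and `:`):

  #!/usr/bin/env bash
  # Compare dotted versions component-wise; returns 0 (true) when $1 < $2.
  lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
      (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 < 2: keep the old-style --rc options"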
00:15:21.045 13:41:14 nvme_fdp -- nvme/functions.sh@10-13 -- # declare -A ctrls nvmes bdfs; declare -a ordered_ctrls
00:15:21.045 13:41:14 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:15:21.045 13:41:14 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:21.045 13:41:14 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:15:21.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:21.562 Waiting for block devices as requested
00:15:21.820 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:15:21.820 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:15:21.820 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:15:22.078 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:15:27.354 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:15:27.354 13:41:20 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:15:27.354 13:41:20 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:15:27.354 13:41:20 nvme_fdp -- nvme/functions.sh@47-48 -- # for ctrl in /sys/class/nvme/nvme*: /sys/class/nvme/nvme0 exists
00:15:27.354 13:41:20 nvme_fdp -- nvme/functions.sh@49-50 -- # pci=0000:00:11.0; pci_can_use 0000:00:11.0 -> 0
00:15:27.354 13:41:20 nvme_fdp -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme0; nvme_get nvme0 id-ctrl /dev/nvme0
00:15:27.354 13:41:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:15:27.354 13:41:20 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl: vid=0x1b36 ssvid=0x1af4
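For reference, the discovery step that precedes all of the parsing: scan_nvme_ctrls walks /sys/class/nvme rather than shelling out to lspci. A small sketch, under the assumption that each controller's `address` attribute carries its PCI BDF (true for PCIe-attached controllers on recent kernels):

  #!/usr/bin/env bash
  # Enumerate kernel-visible NVMe controllers and their PCI addresses,
  # mirroring the for-loop at nvme/functions.sh@47 above.
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                # glob matched nothing
    bdf=$(<"$ctrl/address")                   # e.g. 0000:00:11.0
    echo "${ctrl##*/} -> $bdf"
  done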
00:15:27.354 13:41:20 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl: sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:15:27.355 13:41:20 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:15:27.355 13:41:20 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:15:27.356 13:41:20 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:15:27.357 13:41:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:15:27.357 13:41:20 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl: ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:15:27.357 13:41:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:15:27.357 13:41:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:15:27.357 13:41:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:15:27.357 13:41:20 nvme_fdp -- nvme/functions.sh@53-57 -- # local -n _ctrl_ns=nvme0_ns; found /sys/class/nvme/nvme0/nvme0n1; ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1
00:15:27.357 13:41:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:15:27.357 13:41:20 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:27.358 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:27.358 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.358 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:27.359 13:41:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:27.359 13:41:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:27.359 13:41:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:27.359 13:41:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.359 13:41:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.359 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 
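Just before the `nvme1` fields above, the trace shows the outer discovery loop at `functions.sh@47-63`: each `/sys/class/nvme/nvme*` controller is filtered through `pci_can_use`, identified with `nvme_get ... id-ctrl`, its namespaces identified with `id-ns`, and the results registered in the `ctrls`, `nvmes`, `bdfs`, and `ordered_ctrls` arrays. A simplified sketch of that loop, reusing `nvme_get_sketch` from above; the `pci_can_use` stub and `PCI_BLOCKED` handling are stand-ins, not the real `scripts/common.sh` logic:

#!/usr/bin/env bash
# Simplified sketch of the nvme/functions.sh@47-63 discovery loop seen in the
# trace. Assumptions: pci_can_use is a stand-in; the sysfs device link is
# assumed to resolve to the controller's PCI address.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

pci_can_use() {  # stand-in: reject addresses on an optional block list
    [[ ! " ${PCI_BLOCKED:-} " == *" $1 "* ]]
}

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                                  # e.g. nvme1
    pci=$(basename "$(readlink -f "$ctrl/device")")       # e.g. 0000:00:10.0
    pci_can_use "$pci" || continue

    nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # fills nvme1[...]

    declare -gA "${ctrl_dev}_ns"                          # nvme1_ns
    for ns in "$ctrl/${ctrl_dev}n"*; do                   # /sys/.../nvme1n1
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"    # fills nvme1n1[...]
        eval "${ctrl_dev}_ns[${ns_dev##*n}]=$ns_dev"      # nvme1_ns[1]=nvme1n1
    done

    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev            # index by number
done

This is why the log shows the same `id-ctrl` parse repeated for `nvme0` and `nvme1`, each followed by an `id-ns` parse per namespace: the two QEMU controllers at 0000:00:11.0 and 0000:00:10.0 are enumerated by the same loop.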
13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.360 13:41:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:27.360 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 
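Once populated, these arrays make every captured field addressable by key. For example, a namespace's active block size follows from the fields already traced for nvme0n1: per the NVMe spec, FLBAS bits 3:0 select the LBA format, and that format's `lbads` field is log2 of the data block size (the trace marks the selected entry `(in use)`). A small sketch against the arrays built above; the helper name is illustrative:

# Sketch: derive a namespace's active block size from captured id-ns fields.
# Assumption: ns_block_size is an illustrative helper, not part of the suite.
ns_block_size() {
    local -n _ns=$1                 # e.g. the nvme0n1 array filled above
    local idx lbaf lbads
    idx=$(( ${_ns[flbas]} & 0xf ))  # nvme0n1: 0x4 & 0xf -> format 4
    lbaf=${_ns[lbaf$idx]}           # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf##*lbads:}          # -> "12 rp:0 (in use)"
    echo $(( 1 << ${lbads%% *} ))   # 2^12 = 4096 bytes
}

# ns_block_size nvme0n1  ->  4096

The same decode applied to nvme1n1 below (flbas=0x7, lbaf7 carrying lbads:12 with ms:64) yields a 4096-byte data block with 64 bytes of metadata.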
13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:27.361 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.361 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.362 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:27.363 13:41:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.363 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:27.364 13:41:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:27.364 13:41:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:27.364 13:41:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:27.364 13:41:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:27.364 
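
The trace above is nvme/functions.sh's nvme_get helper populating global associative arrays (nvme1 from `nvme id-ctrl`, nvme1n1 from `nvme id-ns`): each output line is split at the first ':' via IFS, empty values are skipped by the [[ -n ... ]] guards, and the pair is stored with eval. A minimal standalone sketch of that pattern, not the verbatim SPDK helper (the whitespace trimming here is a simplified assumption; the nvme-cli path is the one the trace invokes):

# Minimal sketch of the parse loop traced above; simplified from nvme/functions.sh.
nvme_get() {
  local ref=$1 cmd=$2 dev=$3 reg val
  local -gA "$ref=()"                     # declare the global array, e.g. nvme1n1
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # drop padding around the register name (assumed trim)
    [[ -n $reg && -n $val ]] || continue  # skip blank values, as the [[ -n ... ]] guards do
    eval "${ref}[${reg}]=\"${val# }\""    # e.g. nvme1n1[nsze]="0x17a17a"
  done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}
# Usage mirroring the trace: nvme_get nvme1n1 id-ns /dev/nvme1n1; echo "${nvme1n1[nsze]}"
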
13:41:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.364 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:27.365 13:41:21 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:27.365 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:27.366 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:27.367 13:41:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.367 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:27.368 13:41:21 nvme_fdp -- 
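
Around functions.sh@47-63 (the end of the nvme1 block above and the start of this nvme2 block) the same parse is driven by an enumeration loop: every /sys/class/nvme/nvme* controller is filtered through pci_can_use, its id-ctrl is parsed into nvme<N>, each namespace node gets an id-ns parse, and the results land in the ctrls/nvmes/bdfs/ordered_ctrls maps. A sketch of that outer loop, reusing the nvme_get stand-in above; the sysfs derivation of the PCI address is an assumption, since the trace assigns pci=0000:00:12.0 without showing its source:

# Sketch of the enumeration traced at functions.sh@47-63; helper names reuse the log's.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed source of e.g. 0000:00:12.0
  pci_can_use "$pci" || continue                   # honors the PCI allow/block lists (scripts/common.sh)
  ctrl_dev=${ctrl##*/}                             # e.g. nvme2
  nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
  declare -gA "${ctrl_dev}_ns=()"                  # per-controller namespace map, e.g. nvme2_ns
  declare -n _ctrl_ns=${ctrl_dev}_ns
  for ns in "$ctrl/${ctrl##*/}n"*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                               # e.g. nvme2n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns_dev##*n}]=$ns_dev
  done
  unset -n _ctrl_ns
  ctrls[$ctrl_dev]=$ctrl_dev
  nvmes[$ctrl_dev]=${ctrl_dev}_ns
  bdfs[$ctrl_dev]=$pci                             # e.g. bdfs[nvme1]=0000:00:10.0
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done
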
nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.368 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.369 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.370 13:41:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:27.370 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:27.632 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.632 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.633 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
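Once a namespace array is filled, the trace registers it in _ctrl_ns (functions.sh@58) and the outer loop at functions.sh@47 advances to the next controller under /sys/class/nvme (nvme3 below, at PCI 0000:00:13.0). A minimal sketch of that sysfs enumeration follows, assuming a Linux host with the nvme driver bound; the array names mirror the trace, but the readlink-based BDF lookup and the loop body are assumptions for illustration, not the SPDK implementation.

#!/usr/bin/env bash
# Minimal sketch of the /sys/class/nvme walk performed by the trace, assuming
# a Linux host with NVMe devices bound to the kernel driver. ctrls, bdfs, and
# _ctrl_ns mirror the names in nvme/functions.sh; the BDF lookup via readlink
# is an assumption, not taken from the trace.
declare -A ctrls bdfs
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  ctrl_dev=${ctrl##*/}                                        # e.g. nvme3
  bdfs[$ctrl_dev]=$(basename "$(readlink -f "$ctrl/device")") # e.g. 0000:00:13.0
  ctrls[$ctrl_dev]=$ctrl_dev
  declare -A _ctrl_ns=()
  for ns in "$ctrl/${ctrl_dev}n"*; do                         # nvme3n1, nvme3n2, ...
    [[ -e $ns ]] || continue
    _ctrl_ns[${ns##*n}]=${ns##*/}                             # keyed by namespace number
  done
done

The `${ns##*n}` expansion is the same one visible in the trace: for /sys/class/nvme/nvme2/nvme2n2 it strips everything through the last "n", leaving the namespace number 2 as the array key.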
00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:27.634 
13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:27.634 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:27.635 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:27.635 13:41:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:27.635 13:41:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:27.635 13:41:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:27.635 13:41:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:27.635 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.635 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 
13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 
13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.636 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
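The long run of IFS=: / read -r reg val records above and below is functions.sh splitting an identify-controller dump into "register : value" pairs and caching each populated field in a bash associative array, one eval per field. A minimal sketch of that loop, assuming nvme-cli's id-ctrl text format as input (the real script parses output it captured earlier):

declare -A nvme3
while IFS=: read -r reg val; do
    reg="${reg//[[:space:]]/}"                      # register name, e.g. sqes
    val="${val# }"                                  # value, e.g. 0x66
    [[ -n $val ]] && eval "nvme3[$reg]=\"$val\""    # mirrors the eval records in the trace
done < <(nvme id-ctrl /dev/nvme3)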
00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.637 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:27.638 13:41:21 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
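The ctrl_has_fdp loop that starts above decides which of the four controllers gets the FDP suite: it reads the cached CTRATT word and tests bit 19, the Flexible Data Placement capability. nvme0, nvme1, and nvme2 report 0x8000 (bit clear) while nvme3 reports 0x88010 (bit set), so only nvme3 survives the filter. A stand-alone sketch of the same test, shelling out to nvme-cli instead of using the cached array:

ctrl_has_fdp() {
    local ctrl=$1 ctratt
    ctratt=$(nvme id-ctrl "/dev/$ctrl" | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
    (( ctratt & 1 << 19 ))          # 0x88010 has bit 19 set, 0x8000 does not
}
ctrl_has_fdp nvme3 && echo nvme3    # only nvme3 is echoed on this host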
00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:15:27.638 13:41:21 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:15:27.638 13:41:21 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:15:27.638 13:41:21 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:28.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:29.142 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:29.142 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:29.142 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:29.142 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:29.142 13:41:22 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:29.142 13:41:22 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:29.142 13:41:22 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:29.142 13:41:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:29.142 ************************************ 00:15:29.142 START TEST nvme_flexible_data_placement 00:15:29.142 ************************************ 00:15:29.142 13:41:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:29.401 Initializing NVMe Controllers 00:15:29.401 Attaching to 0000:00:13.0 00:15:29.401 Controller supports FDP Attached to 0000:00:13.0 00:15:29.401 Namespace ID: 1 Endurance Group ID: 1 00:15:29.401 Initialization complete. 00:15:29.401 00:15:29.401 ================================== 00:15:29.401 == FDP tests for Namespace: #01 == 00:15:29.401 ================================== 00:15:29.401 00:15:29.401 Get Feature: FDP: 00:15:29.401 ================= 00:15:29.401 Enabled: Yes 00:15:29.401 FDP configuration Index: 0 00:15:29.401 00:15:29.401 FDP configurations log page 00:15:29.401 =========================== 00:15:29.401 Number of FDP configurations: 1 00:15:29.401 Version: 0 00:15:29.401 Size: 112 00:15:29.401 FDP Configuration Descriptor: 0 00:15:29.401 Descriptor Size: 96 00:15:29.401 Reclaim Group Identifier format: 2 00:15:29.401 FDP Volatile Write Cache: Not Present 00:15:29.401 FDP Configuration: Valid 00:15:29.401 Vendor Specific Size: 0 00:15:29.401 Number of Reclaim Groups: 2 00:15:29.401 Number of Reclaim Unit Handles: 8 00:15:29.401 Max Placement Identifiers: 128 00:15:29.401 Number of Namespaces Supported: 256 00:15:29.401 Reclaim Unit Nominal Size: 6000000 bytes 00:15:29.401 Estimated Reclaim Unit Time Limit: Not Reported 00:15:29.401 RUH Desc #000: RUH Type: Initially Isolated 00:15:29.401 RUH Desc #001: RUH Type: Initially Isolated 00:15:29.401 RUH Desc #002: RUH Type: Initially Isolated 00:15:29.401 RUH Desc #003: RUH Type: Initially Isolated 00:15:29.401 RUH Desc #004: RUH Type: Initially Isolated 00:15:29.401 RUH Desc #005: RUH Type: Initially Isolated 00:15:29.401 RUH Desc #006: RUH Type: Initially Isolated 00:15:29.401 RUH Desc #007: RUH Type: Initially Isolated 00:15:29.401 00:15:29.401 FDP reclaim unit handle usage log page 00:15:29.401 ====================================== 00:15:29.401 Number of Reclaim Unit Handles: 8 00:15:29.401 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:29.401 RUH Usage Desc #001: RUH Attributes: Unused 00:15:29.401 RUH Usage Desc #002: RUH Attributes: Unused 00:15:29.401 RUH Usage Desc #003: RUH Attributes: Unused 00:15:29.401 RUH Usage Desc #004: RUH Attributes: Unused 00:15:29.401 RUH Usage Desc #005: RUH Attributes: Unused 00:15:29.401 RUH Usage Desc #006: RUH Attributes: Unused 00:15:29.401 RUH Usage Desc #007: RUH Attributes: Unused 00:15:29.401 00:15:29.401 FDP statistics log page 00:15:29.401 ======================= 00:15:29.401 Host bytes with metadata written: 772218880 00:15:29.401 Media bytes with metadata written: 772358144 00:15:29.401 Media bytes erased: 0 00:15:29.401 00:15:29.401 FDP Reclaim unit handle status 00:15:29.401 ============================== 00:15:29.401 Number of RUHS descriptors: 2 00:15:29.401 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001f8e 00:15:29.401 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:15:29.401 00:15:29.401 FDP write on placement id: 0 success 00:15:29.402 00:15:29.402 Set Feature: Enabling FDP events on Placement handle: #0 
Success 00:15:29.402 00:15:29.402 IO mgmt send: RUH update for Placement ID: #0 Success 00:15:29.402 00:15:29.402 Get Feature: FDP Events for Placement handle: #0 00:15:29.402 ======================== 00:15:29.402 Number of FDP Events: 6 00:15:29.402 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:15:29.402 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:15:29.402 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:15:29.402 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:15:29.402 FDP Event: #4 Type: Media Reallocated Enabled: No 00:15:29.402 FDP Event: #5 Type: Implicitly Modified RUH Enabled: No 00:15:29.402 00:15:29.402 FDP events log page 00:15:29.402 =================== 00:15:29.402 Number of FDP events: 1 00:15:29.402 FDP Event #0: 00:15:29.402 Event Type: RU Not Written to Capacity 00:15:29.402 Placement Identifier: Valid 00:15:29.402 NSID: Valid 00:15:29.402 Location: Valid 00:15:29.402 Placement Identifier: 0 00:15:29.402 Event Timestamp: 8 00:15:29.402 Namespace Identifier: 1 00:15:29.402 Reclaim Group Identifier: 0 00:15:29.402 Reclaim Unit Handle Identifier: 0 00:15:29.402 00:15:29.402 FDP test passed 00:15:29.402 00:15:29.402 real 0m0.309s 00:15:29.402 user 0m0.098s 00:15:29.402 sys 0m0.110s 00:15:29.402 13:41:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:29.402 13:41:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:15:29.402 ************************************ 00:15:29.402 END TEST nvme_flexible_data_placement 00:15:29.402 ************************************ 00:15:29.402 00:15:29.402 real 0m8.764s 00:15:29.402 user 0m1.503s 00:15:29.402 sys 0m2.203s 00:15:29.402 13:41:23 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:29.402 13:41:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:29.402 ************************************ 00:15:29.402 END TEST nvme_fdp 00:15:29.402 ************************************ 00:15:29.661 13:41:23 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:15:29.661 13:41:23 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:29.661 13:41:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:29.661 13:41:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:29.661 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:15:29.661 ************************************ 00:15:29.661 START TEST nvme_rpc 00:15:29.661 ************************************ 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:29.661 * Looking for test storage... 
00:15:29.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.661 13:41:23 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:29.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.661 --rc genhtml_branch_coverage=1 00:15:29.661 --rc genhtml_function_coverage=1 00:15:29.661 --rc genhtml_legend=1 00:15:29.661 --rc geninfo_all_blocks=1 00:15:29.661 --rc geninfo_unexecuted_blocks=1 00:15:29.661 00:15:29.661 ' 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:29.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.661 --rc genhtml_branch_coverage=1 00:15:29.661 --rc genhtml_function_coverage=1 00:15:29.661 --rc genhtml_legend=1 00:15:29.661 --rc geninfo_all_blocks=1 00:15:29.661 --rc geninfo_unexecuted_blocks=1 00:15:29.661 00:15:29.661 ' 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:15:29.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.661 --rc genhtml_branch_coverage=1 00:15:29.661 --rc genhtml_function_coverage=1 00:15:29.661 --rc genhtml_legend=1 00:15:29.661 --rc geninfo_all_blocks=1 00:15:29.661 --rc geninfo_unexecuted_blocks=1 00:15:29.661 00:15:29.661 ' 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:29.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.661 --rc genhtml_branch_coverage=1 00:15:29.661 --rc genhtml_function_coverage=1 00:15:29.661 --rc genhtml_legend=1 00:15:29.661 --rc geninfo_all_blocks=1 00:15:29.661 --rc geninfo_unexecuted_blocks=1 00:15:29.661 00:15:29.661 ' 00:15:29.661 13:41:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.661 13:41:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:15:29.661 13:41:23 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:15:29.662 13:41:23 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:29.662 13:41:23 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:29.662 13:41:23 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:15:29.920 13:41:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:15:29.920 13:41:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67519 00:15:29.920 13:41:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:29.920 13:41:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:15:29.920 13:41:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67519 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 67519 ']' 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:29.920 13:41:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.921 [2024-11-06 13:41:23.859458] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
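get_first_nvme_bdf above resolves the PCIe address the RPC test binds to: gen_nvme.sh emits the controller config as JSON, jq pulls out every traddr, and the helper returns the first one, 0000:00:10.0 on this host. Condensed into a sketch:

rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1     # the helper bails out when no NVMe devices exist
echo "${bdfs[0]}"                   # 0000:00:10.0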
00:15:29.921 [2024-11-06 13:41:23.859634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67519 ] 00:15:30.179 [2024-11-06 13:41:24.050955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:30.438 [2024-11-06 13:41:24.209464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.438 [2024-11-06 13:41:24.209492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.375 13:41:25 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:31.375 13:41:25 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:15:31.375 13:41:25 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:15:31.634 Nvme0n1 00:15:31.634 13:41:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:15:31.634 13:41:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:15:31.893 request: 00:15:31.893 { 00:15:31.893 "bdev_name": "Nvme0n1", 00:15:31.893 "filename": "non_existing_file", 00:15:31.893 "method": "bdev_nvme_apply_firmware", 00:15:31.893 "req_id": 1 00:15:31.893 } 00:15:31.893 Got JSON-RPC error response 00:15:31.893 response: 00:15:31.893 { 00:15:31.893 "code": -32603, 00:15:31.893 "message": "open file failed." 00:15:31.893 } 00:15:31.893 13:41:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:15:31.893 13:41:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:15:31.893 13:41:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:32.201 13:41:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:32.201 13:41:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67519 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 67519 ']' 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 67519 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67519 00:15:32.202 killing process with pid 67519 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67519' 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@971 -- # kill 67519 00:15:32.202 13:41:25 nvme_rpc -- common/autotest_common.sh@976 -- # wait 67519 00:15:34.737 00:15:34.737 real 0m4.945s 00:15:34.737 user 0m9.143s 00:15:34.737 sys 0m0.793s 00:15:34.737 13:41:28 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:34.737 ************************************ 00:15:34.737 END TEST nvme_rpc 00:15:34.737 ************************************ 00:15:34.737 13:41:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.737 13:41:28 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:34.737 13:41:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:15:34.737 13:41:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:34.737 13:41:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.737 ************************************ 00:15:34.737 START TEST nvme_rpc_timeouts 00:15:34.737 ************************************ 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:34.737 * Looking for test storage... 00:15:34.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:15:34.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
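The nvme_rpc run that just finished (pid 67519) reduces to three JSON-RPC calls: attach the first controller over PCIe, ask it to apply firmware from a file that does not exist, and require the -32603 "open file failed" error before detaching. A hedged replay of that sequence against a running spdk_tgt:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo 'expected JSON-RPC error -32603 (open file failed)' >&2
    exit 1
fi
$rpc bdev_nvme_detach_controller Nvme0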
00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.737 13:41:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:34.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.737 --rc genhtml_branch_coverage=1 00:15:34.737 --rc genhtml_function_coverage=1 00:15:34.737 --rc genhtml_legend=1 00:15:34.737 --rc geninfo_all_blocks=1 00:15:34.737 --rc geninfo_unexecuted_blocks=1 00:15:34.737 00:15:34.737 ' 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:34.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.737 --rc genhtml_branch_coverage=1 00:15:34.737 --rc genhtml_function_coverage=1 00:15:34.737 --rc genhtml_legend=1 00:15:34.737 --rc geninfo_all_blocks=1 00:15:34.737 --rc geninfo_unexecuted_blocks=1 00:15:34.737 00:15:34.737 ' 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:34.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.737 --rc genhtml_branch_coverage=1 00:15:34.737 --rc genhtml_function_coverage=1 00:15:34.737 --rc genhtml_legend=1 00:15:34.737 --rc geninfo_all_blocks=1 00:15:34.737 --rc geninfo_unexecuted_blocks=1 00:15:34.737 00:15:34.737 ' 00:15:34.737 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:34.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.737 --rc genhtml_branch_coverage=1 00:15:34.737 --rc genhtml_function_coverage=1 00:15:34.737 --rc genhtml_legend=1 00:15:34.737 --rc geninfo_all_blocks=1 00:15:34.737 --rc geninfo_unexecuted_blocks=1 00:15:34.737 00:15:34.737 ' 00:15:34.737 13:41:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.737 13:41:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67601 00:15:34.737 13:41:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67601 00:15:34.737 13:41:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67634 00:15:34.738 13:41:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:34.738 13:41:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:15:34.738 13:41:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67634 00:15:34.738 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 67634 ']' 00:15:34.738 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.738 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:34.738 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
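The scripts/common.sh records threaded through each suite's prologue are a version gate: lt 1.15 2 splits both version strings on the characters .-: and compares them component-wise to pick the lcov option spelling. A sketch, assuming the helper's convention that returning 0 means strictly less than:

lt() {   # usage: lt VER1 VER2 -> exit 0 iff VER1 < VER2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                        # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov older than 2: use the legacy --rc option names'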
00:15:34.738 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:34.738 13:41:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:34.996 [2024-11-06 13:41:28.789478] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:15:34.996 [2024-11-06 13:41:28.789911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67634 ] 00:15:35.255 [2024-11-06 13:41:28.983778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:35.255 [2024-11-06 13:41:29.113722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.255 [2024-11-06 13:41:29.113743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.190 13:41:30 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.190 13:41:30 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:15:36.190 13:41:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:15:36.190 Checking default timeout settings: 00:15:36.190 13:41:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:36.448 13:41:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:15:36.448 Making settings changes with rpc: 00:15:36.448 13:41:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:15:36.707 13:41:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:15:36.707 Check default vs. modified settings: 00:15:36.707 13:41:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:37.274 13:41:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:15:37.274 13:41:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67601 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67601 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:37.274 Setting action_on_timeout is changed as expected. 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67601 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67601 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:37.274 Setting timeout_us is changed as expected. 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67601 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67601 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:37.274 Setting timeout_admin_us is changed as expected. 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
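The timeouts pass above is a save/modify/save/diff cycle: dump the JSON config, set the three bdev_nvme timeout knobs over RPC, dump again, then require each field to differ between the two dumps (none/abort, 0/12000000, 0/24000000). Condensed into a sketch that reuses the trace's tmpfile names:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_67601
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_67601

check_setting() {                   # fail quietly unless the field changed
    local name=$1 before after
    before=$(grep "$name" /tmp/settings_default_67601 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$name" /tmp/settings_modified_67601 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $name is changed as expected."
}
for s in action_on_timeout timeout_us timeout_admin_us; do check_setting "$s"; done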
00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67601 /tmp/settings_modified_67601 00:15:37.274 13:41:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67634 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 67634 ']' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 67634 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67634 00:15:37.274 killing process with pid 67634 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67634' 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 67634 00:15:37.274 13:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 67634 00:15:39.826 RPC TIMEOUT SETTING TEST PASSED. 00:15:39.826 13:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:15:39.826 00:15:39.826 real 0m5.360s 00:15:39.826 user 0m10.188s 00:15:39.826 sys 0m0.821s 00:15:39.826 ************************************ 00:15:39.826 END TEST nvme_rpc_timeouts 00:15:39.826 ************************************ 00:15:39.826 13:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:39.826 13:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:40.110 13:41:33 -- spdk/autotest.sh@239 -- # uname -s 00:15:40.110 13:41:33 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:15:40.110 13:41:33 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:40.110 13:41:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:40.110 13:41:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.110 13:41:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.110 ************************************ 00:15:40.110 START TEST sw_hotplug 00:15:40.110 ************************************ 00:15:40.110 13:41:33 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:40.110 * Looking for test storage... 
00:15:40.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:40.110 13:41:33 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:40.110 13:41:33 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:15:40.110 13:41:33 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:40.110 13:41:34 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.110 13:41:34 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:15:40.110 13:41:34 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.110 13:41:34 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:40.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.110 --rc genhtml_branch_coverage=1 00:15:40.110 --rc genhtml_function_coverage=1 00:15:40.110 --rc genhtml_legend=1 00:15:40.110 --rc geninfo_all_blocks=1 00:15:40.110 --rc geninfo_unexecuted_blocks=1 00:15:40.110 00:15:40.110 ' 00:15:40.110 13:41:34 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:40.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.110 --rc genhtml_branch_coverage=1 00:15:40.110 --rc genhtml_function_coverage=1 00:15:40.110 --rc genhtml_legend=1 00:15:40.110 --rc geninfo_all_blocks=1 00:15:40.110 --rc geninfo_unexecuted_blocks=1 00:15:40.110 00:15:40.110 ' 00:15:40.110 13:41:34 
sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:40.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.110 --rc genhtml_branch_coverage=1 00:15:40.110 --rc genhtml_function_coverage=1 00:15:40.110 --rc genhtml_legend=1 00:15:40.110 --rc geninfo_all_blocks=1 00:15:40.110 --rc geninfo_unexecuted_blocks=1 00:15:40.110 00:15:40.110 ' 00:15:40.110 13:41:34 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:40.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.111 --rc genhtml_branch_coverage=1 00:15:40.111 --rc genhtml_function_coverage=1 00:15:40.111 --rc genhtml_legend=1 00:15:40.111 --rc geninfo_all_blocks=1 00:15:40.111 --rc geninfo_unexecuted_blocks=1 00:15:40.111 00:15:40.111 ' 00:15:40.111 13:41:34 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:40.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:40.936 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:40.936 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:40.936 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:40.936 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:40.936 13:41:34 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:15:40.936 13:41:34 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:15:40.936 13:41:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:15:40.936 13:41:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@233 -- # local class 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:40.936 
13:41:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:40.936 13:41:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:15:40.937 13:41:34 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:40.937 13:41:34 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:15:40.937 13:41:34 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:15:40.937 13:41:34 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:41.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:41.502 Waiting for block devices as requested 00:15:41.760 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:41.760 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:41.760 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:42.017 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:47.276 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:47.276 13:41:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:15:47.276 13:41:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:47.534 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:15:47.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:47.534 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:15:48.101 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:15:48.360 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.360 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:15:48.360 13:41:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68522 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:15:48.360 13:41:42 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:15:48.360 13:41:42 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:15:48.360 13:41:42 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:15:48.360 13:41:42 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:15:48.360 13:41:42 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:48.360 13:41:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:48.619 Initializing NVMe Controllers 00:15:48.619 Attaching to 0000:00:10.0 00:15:48.619 Attaching to 0000:00:11.0 00:15:48.619 Attached to 0000:00:10.0 00:15:48.619 Attached to 0000:00:11.0 00:15:48.619 Initialization complete. Starting I/O... 00:15:48.619 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:15:48.619 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:15:48.619 00:15:49.999 QEMU NVMe Ctrl (12340 ): 1052 I/Os completed (+1052) 00:15:49.999 QEMU NVMe Ctrl (12341 ): 1054 I/Os completed (+1054) 00:15:49.999 00:15:50.936 QEMU NVMe Ctrl (12340 ): 2462 I/Os completed (+1410) 00:15:50.936 QEMU NVMe Ctrl (12341 ): 2480 I/Os completed (+1426) 00:15:50.936 00:15:51.872 QEMU NVMe Ctrl (12340 ): 4126 I/Os completed (+1664) 00:15:51.872 QEMU NVMe Ctrl (12341 ): 4144 I/Os completed (+1664) 00:15:51.872 00:15:52.807 QEMU NVMe Ctrl (12340 ): 5795 I/Os completed (+1669) 00:15:52.807 QEMU NVMe Ctrl (12341 ): 5821 I/Os completed (+1677) 00:15:52.807 00:15:53.788 QEMU NVMe Ctrl (12340 ): 7571 I/Os completed (+1776) 00:15:53.788 QEMU NVMe Ctrl (12341 ): 7597 I/Os completed (+1776) 00:15:53.788 00:15:54.355 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:54.355 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:54.355 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:54.614 [2024-11-06 13:41:48.337412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:54.615 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:54.615 [2024-11-06 13:41:48.339584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.339791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.339825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.339852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:54.615 [2024-11-06 13:41:48.343210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.343275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.343300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.343324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:54.615 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:54.615 [2024-11-06 13:41:48.371800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
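The removal that just fired (sw_hotplug.sh@39-40, one "echo 1" per device) goes through the kernel's PCI sysfs interface; the "aborting outstanding command" errors above are the expected fallout of yanking a controller with I/O in flight. A minimal sketch of that remove step, assuming the standard sysfs layout (the function name is illustrative, not the script's):

    # Ask the kernel to delete each PCI function; in-flight commands
    # get aborted, which the driver logs as the errors seen above.
    remove_devices() {
        local bdf
        for bdf in "$@"; do
            echo 1 > "/sys/bus/pci/devices/$bdf/remove"
        done
    }
    remove_devices 0000:00:10.0 0000:00:11.0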
00:15:54.615 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:54.615 [2024-11-06 13:41:48.373762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.373950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.373991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.374015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:54.615 [2024-11-06 13:41:48.377289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.377431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.377491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 [2024-11-06 13:41:48.377626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.615 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:54.615 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:15:54.615 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:54.615 EAL: Scan for (pci) bus failed. 00:15:54.615 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:54.615 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:54.615 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:54.615 00:15:54.874 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:54.874 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:54.874 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:54.874 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:54.874 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:54.874 Attaching to 0000:00:10.0 00:15:54.874 Attached to 0000:00:10.0 00:15:54.874 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:54.874 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:54.874 13:41:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:54.874 Attaching to 0000:00:11.0 00:15:54.874 Attached to 0000:00:11.0 00:15:55.817 QEMU NVMe Ctrl (12340 ): 1876 I/Os completed (+1876) 00:15:55.817 QEMU NVMe Ctrl (12341 ): 1676 I/Os completed (+1676) 00:15:55.817 00:15:56.754 QEMU NVMe Ctrl (12340 ): 3832 I/Os completed (+1956) 00:15:56.754 QEMU NVMe Ctrl (12341 ): 3634 I/Os completed (+1958) 00:15:56.754 00:15:57.691 QEMU NVMe Ctrl (12340 ): 5808 I/Os completed (+1976) 00:15:57.691 QEMU NVMe Ctrl (12341 ): 5613 I/Os completed (+1979) 00:15:57.691 00:15:58.626 QEMU NVMe Ctrl (12340 ): 7457 I/Os completed (+1649) 00:15:58.626 QEMU NVMe Ctrl (12341 ): 7265 I/Os completed (+1652) 00:15:58.626 00:16:00.025 QEMU NVMe Ctrl (12340 ): 9091 I/Os completed (+1634) 00:16:00.025 QEMU NVMe Ctrl (12341 ): 8909 I/Os completed (+1644) 00:16:00.025 00:16:00.960 QEMU NVMe Ctrl (12340 ): 10883 I/Os completed (+1792) 00:16:00.960 QEMU NVMe Ctrl (12341 ): 10703 I/Os completed (+1794) 00:16:00.960 00:16:01.895 QEMU NVMe Ctrl (12340 ): 12831 I/Os completed (+1948) 00:16:01.895 QEMU NVMe Ctrl (12341 ): 12651 I/Os completed (+1948) 
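The re-attach side (@56-62 in the trace: one rescan, then uio_pci_generic and the BDF echoed per device) is the mirror image. A plausible plain-bash rendering, assuming the echoes land in the standard driver_override/drivers_probe sysfs files, which the trace does not show explicitly:

    # Rediscover removed functions, steer each one to the wanted
    # driver before probing it, then clear the override again.
    rescan_and_bind() {
        local driver=$1 bdf; shift
        echo 1 > /sys/bus/pci/rescan
        for bdf in "$@"; do
            echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
            echo "$bdf" > /sys/bus/pci/drivers_probe
            echo "" > "/sys/bus/pci/devices/$bdf/driver_override"
        done
    }
    rescan_and_bind uio_pci_generic 0000:00:10.0 0000:00:11.0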
00:16:01.895 00:16:02.831 QEMU NVMe Ctrl (12340 ): 14555 I/Os completed (+1724) 00:16:02.831 QEMU NVMe Ctrl (12341 ): 14375 I/Os completed (+1724) 00:16:02.831 00:16:03.804 QEMU NVMe Ctrl (12340 ): 16367 I/Os completed (+1812) 00:16:03.804 QEMU NVMe Ctrl (12341 ): 16187 I/Os completed (+1812) 00:16:03.804 00:16:04.739 QEMU NVMe Ctrl (12340 ): 18307 I/Os completed (+1940) 00:16:04.739 QEMU NVMe Ctrl (12341 ): 18127 I/Os completed (+1940) 00:16:04.739 00:16:05.673 QEMU NVMe Ctrl (12340 ): 20227 I/Os completed (+1920) 00:16:05.673 QEMU NVMe Ctrl (12341 ): 20047 I/Os completed (+1920) 00:16:05.673 00:16:06.606 QEMU NVMe Ctrl (12340 ): 22107 I/Os completed (+1880) 00:16:06.606 QEMU NVMe Ctrl (12341 ): 21952 I/Os completed (+1905) 00:16:06.606 00:16:06.865 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:06.865 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:06.865 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:06.865 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:06.865 [2024-11-06 13:42:00.735307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:06.865 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:06.865 [2024-11-06 13:42:00.737461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.737647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.737781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.737843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:06.865 [2024-11-06 13:42:00.743602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.743754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.743869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.743930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:06.865 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:06.865 [2024-11-06 13:42:00.772404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
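Each Attaching/Attached block followed by a stretch of I/O counters is one pass of the helper's outer loop. In this first run use_bdev=false (kernel mode), so there is no bdev list to poll and the helper simply sleeps hotplug_wait after the removes and twice that after the rebind. Condensed, with variable names taken from the xtrace and the bodies abbreviated:

    hotplug_events=3 hotplug_wait=6
    while (( hotplug_events-- )); do           # sw_hotplug.sh@38
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"
        done
        sleep "$hotplug_wait"                  # kernel mode: nothing to poll
        echo 1 > /sys/bus/pci/rescan
        # ...rebind each device, then sleep $((hotplug_wait * 2))
    done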
00:16:06.865 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:06.865 [2024-11-06 13:42:00.774388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.774550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.774623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.774672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:06.865 [2024-11-06 13:42:00.777737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.777879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.778003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 [2024-11-06 13:42:00.778075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:06.865 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:06.865 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:06.865 EAL: Scan for (pci) bus failed. 00:16:06.865 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:07.124 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:07.124 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:07.124 13:42:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:07.124 13:42:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:07.124 13:42:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:07.124 13:42:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:07.124 13:42:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:07.124 13:42:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:07.124 Attaching to 0000:00:10.0 00:16:07.124 Attached to 0000:00:10.0 00:16:07.383 13:42:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:07.383 13:42:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:07.383 13:42:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:07.383 Attaching to 0000:00:11.0 00:16:07.383 Attached to 0000:00:11.0 00:16:07.641 QEMU NVMe Ctrl (12340 ): 1012 I/Os completed (+1012) 00:16:07.641 QEMU NVMe Ctrl (12341 ): 788 I/Os completed (+788) 00:16:07.641 00:16:09.016 QEMU NVMe Ctrl (12340 ): 2864 I/Os completed (+1852) 00:16:09.016 QEMU NVMe Ctrl (12341 ): 2640 I/Os completed (+1852) 00:16:09.016 00:16:09.950 QEMU NVMe Ctrl (12340 ): 4620 I/Os completed (+1756) 00:16:09.950 QEMU NVMe Ctrl (12341 ): 4400 I/Os completed (+1760) 00:16:09.950 00:16:10.885 QEMU NVMe Ctrl (12340 ): 6308 I/Os completed (+1688) 00:16:10.885 QEMU NVMe Ctrl (12341 ): 6091 I/Os completed (+1691) 00:16:10.885 00:16:11.821 QEMU NVMe Ctrl (12340 ): 8192 I/Os completed (+1884) 00:16:11.821 QEMU NVMe Ctrl (12341 ): 7986 I/Os completed (+1895) 00:16:11.821 00:16:12.758 QEMU NVMe Ctrl (12340 ): 10064 I/Os completed (+1872) 00:16:12.758 QEMU NVMe Ctrl (12341 ): 9860 I/Os completed (+1874) 00:16:12.758 00:16:13.694 QEMU NVMe Ctrl (12340 ): 11936 I/Os completed (+1872) 00:16:13.694 QEMU NVMe Ctrl (12341 ): 11732 I/Os completed (+1872) 00:16:13.694 00:16:14.631 
QEMU NVMe Ctrl (12340 ): 13464 I/Os completed (+1528) 00:16:14.631 QEMU NVMe Ctrl (12341 ): 13279 I/Os completed (+1547) 00:16:14.631 00:16:16.008 QEMU NVMe Ctrl (12340 ): 15220 I/Os completed (+1756) 00:16:16.008 QEMU NVMe Ctrl (12341 ): 15035 I/Os completed (+1756) 00:16:16.008 00:16:16.946 QEMU NVMe Ctrl (12340 ): 17000 I/Os completed (+1780) 00:16:16.946 QEMU NVMe Ctrl (12341 ): 16822 I/Os completed (+1787) 00:16:16.946 00:16:17.881 QEMU NVMe Ctrl (12340 ): 18796 I/Os completed (+1796) 00:16:17.881 QEMU NVMe Ctrl (12341 ): 18620 I/Os completed (+1798) 00:16:17.881 00:16:18.816 QEMU NVMe Ctrl (12340 ): 20720 I/Os completed (+1924) 00:16:18.816 QEMU NVMe Ctrl (12341 ): 20544 I/Os completed (+1924) 00:16:18.816 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:19.382 [2024-11-06 13:42:13.164451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:19.382 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:19.382 [2024-11-06 13:42:13.166726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.166944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.167016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.167187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:19.382 [2024-11-06 13:42:13.170847] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.170974] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.171050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.171135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:19.382 [2024-11-06 13:42:13.204076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
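The per-run figure reported below ("remove_attach_helper took 43.27s ...") comes from bash's time keyword with TIMEFORMAT=%2R, captured by the timing_cmd wrapper visible in the trace. The capture trick, reduced to its core (the helper name and message here are illustrative):

    measure() {
        local t TIMEFORMAT=%2R
        exec 3>&1 4>&2                      # keep the caller's streams
        t=$({ time "$@" 1>&3 2>&4; } 2>&1)  # capture only time's report
        exec 3>&- 4>&-
        printf 'took %ss\n' "$t"
    }
    measure sleep 1    # prints: took 1.00s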
00:16:19.382 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:19.382 [2024-11-06 13:42:13.206207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.206310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.206344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.206369] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:19.382 [2024-11-06 13:42:13.209488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.209540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.209569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 [2024-11-06 13:42:13.209589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:19.382 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:19.641 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:19.641 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:19.641 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:19.641 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:19.641 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:19.641 Attaching to 0000:00:10.0 00:16:19.641 Attached to 0000:00:10.0 00:16:19.641 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:19.641 QEMU NVMe Ctrl (12340 ): 196 I/Os completed (+196) 00:16:19.641 00:16:19.641 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:19.641 13:42:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:19.641 Attaching to 0000:00:11.0 00:16:19.641 Attached to 0000:00:11.0 00:16:19.641 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:19.641 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:19.641 [2024-11-06 13:42:13.607668] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:16:31.852 13:42:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:31.852 13:42:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:31.852 13:42:25 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.27 00:16:31.852 13:42:25 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.27 00:16:31.852 13:42:25 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:16:31.852 13:42:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.27 00:16:31.852 13:42:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.27 2 00:16:31.852 remove_attach_helper took 43.27s to complete (handling 2 nvme drive(s)) 13:42:25 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@93 
-- # kill -0 68522 00:16:38.478 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68522) - No such process 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68522 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69061 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69061 00:16:38.478 13:42:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.478 13:42:31 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 69061 ']' 00:16:38.478 13:42:31 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.478 13:42:31 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:38.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.478 13:42:31 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.478 13:42:31 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:38.478 13:42:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:38.478 [2024-11-06 13:42:31.757888] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:16:38.478 [2024-11-06 13:42:31.758084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69061 ] 00:16:38.478 [2024-11-06 13:42:31.953275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.478 [2024-11-06 13:42:32.118173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:16:39.413 13:42:33 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local 
hotplug_events=3 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:39.413 13:42:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:45.983 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:45.983 13:42:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.983 13:42:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:45.983 [2024-11-06 13:42:39.173079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:45.983 [2024-11-06 13:42:39.175664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:45.983 [2024-11-06 13:42:39.175726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.983 [2024-11-06 13:42:39.175748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.983 [2024-11-06 13:42:39.175774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:45.983 [2024-11-06 13:42:39.175786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.984 [2024-11-06 13:42:39.175801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.984 [2024-11-06 13:42:39.175814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:45.984 [2024-11-06 13:42:39.175828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.984 [2024-11-06 13:42:39.175840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.984 [2024-11-06 13:42:39.175859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:45.984 [2024-11-06 13:42:39.175871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.984 [2024-11-06 13:42:39.175885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.984 13:42:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
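From here the run is in target mode: whether a device is gone is decided by asking the spdk_tgt process which PCI functions still back a bdev. The bdev_bdfs pipeline traced above at @12-13 (rpc_cmd bdev_get_bdevs, a jq projection, sort -u; rpc_cmd is the suite's JSON-RPC wrapper, sketched further below) reduces to:

    # PCI addresses of every NVMe-backed bdev the target still sees.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))   # e.g. (0000:00:10.0 0000:00:11.0)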
00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:45.984 [2024-11-06 13:42:39.573089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:16:45.984 [2024-11-06 13:42:39.575733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:45.984 [2024-11-06 13:42:39.575778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.984 [2024-11-06 13:42:39.575798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.984 [2024-11-06 13:42:39.575822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:45.984 [2024-11-06 13:42:39.575836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.984 [2024-11-06 13:42:39.575848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.984 [2024-11-06 13:42:39.575864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:45.984 [2024-11-06 13:42:39.575876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.984 [2024-11-06 13:42:39.575890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.984 [2024-11-06 13:42:39.575904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:45.984 [2024-11-06 13:42:39.575918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.984 [2024-11-06 13:42:39.575930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:45.984 13:42:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.984 13:42:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:45.984 13:42:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:45.984 13:42:39 sw_hotplug -- 
nvme/sw_hotplug.sh@62 -- # echo '' 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:45.984 13:42:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:46.242 13:42:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:46.242 13:42:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:46.242 13:42:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:58.525 13:42:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.525 13:42:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:58.525 13:42:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:58.525 [2024-11-06 13:42:52.173353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
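The escaped comparison at @71 above is just the sorted BDF list matched, as one quoted string, against the two expected controllers; if either failed to come back the test would bail. In plain form (the failure message is illustrative):

    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]] \
        || { echo 'not all controllers reattached' >&2; exit 1; }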
00:16:58.525 [2024-11-06 13:42:52.176428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.525 [2024-11-06 13:42:52.176482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.525 [2024-11-06 13:42:52.176502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.525 [2024-11-06 13:42:52.176532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.525 [2024-11-06 13:42:52.176546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.525 [2024-11-06 13:42:52.176563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.525 [2024-11-06 13:42:52.176579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.525 [2024-11-06 13:42:52.176594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.525 [2024-11-06 13:42:52.176608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.525 [2024-11-06 13:42:52.176625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.525 [2024-11-06 13:42:52.176638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.525 [2024-11-06 13:42:52.176654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:58.525 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:58.526 13:42:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.526 13:42:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:58.526 13:42:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.526 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:58.526 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:58.784 [2024-11-06 13:42:52.573363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
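The (( 1 > 0 )) / sleep 0.5 pairs in the trace are the helper polling until every removed function drops out of the target's bdev list (@50-51); here 0000:00:10.0 is already gone while 0000:00:11.0 is still tearing down. Roughly, reusing the bdev_bdfs sketch:

    while bdfs=($(bdev_bdfs)); (( ${#bdfs[@]} )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done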
00:16:58.784 [2024-11-06 13:42:52.575989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.784 [2024-11-06 13:42:52.576044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.784 [2024-11-06 13:42:52.576069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.784 [2024-11-06 13:42:52.576092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.784 [2024-11-06 13:42:52.576107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.784 [2024-11-06 13:42:52.576120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.784 [2024-11-06 13:42:52.576135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.784 [2024-11-06 13:42:52.576147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.784 [2024-11-06 13:42:52.576161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.784 [2024-11-06 13:42:52.576173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:58.784 [2024-11-06 13:42:52.576187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.784 [2024-11-06 13:42:52.576199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.784 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:58.784 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:58.784 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:58.784 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:58.784 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:58.784 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:58.784 13:42:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.784 13:42:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:58.784 13:42:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.042 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:59.042 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:59.042 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:59.042 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:59.042 13:42:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:59.300 13:42:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:59.301 13:42:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:59.301 13:42:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:59.301 13:42:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:59.301 13:42:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
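rpc_cmd, used for every bdev_get_bdevs above, is the suite's wrapper around SPDK's JSON-RPC client. The real helper keeps a persistent client process, so the following stand-in is only behaviorally equivalent for simple calls like these; the socket path is the default one spdk_tgt announced when it started:

    rpc_cmd() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"
    }
    rpc_cmd bdev_get_bdevs            # JSON array of bdevs
    rpc_cmd bdev_nvme_set_hotplug -e  # as at sw_hotplug.sh@115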
00:16:59.301 13:42:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:59.301 13:42:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:59.301 13:42:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:11.509 13:43:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.509 13:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:11.509 13:43:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:11.509 [2024-11-06 13:43:05.273654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:11.509 [2024-11-06 13:43:05.276570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.509 [2024-11-06 13:43:05.276618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.509 [2024-11-06 13:43:05.276637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.509 [2024-11-06 13:43:05.276670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.509 [2024-11-06 13:43:05.276682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.509 [2024-11-06 13:43:05.276702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.509 [2024-11-06 13:43:05.276715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.509 [2024-11-06 13:43:05.276730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.509 [2024-11-06 13:43:05.276743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.509 [2024-11-06 13:43:05.276760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.509 [2024-11-06 13:43:05.276772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.509 [2024-11-06 13:43:05.276787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:11.509 13:43:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.509 13:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:11.509 13:43:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:11.509 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:11.767 [2024-11-06 13:43:05.673671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:11.767 [2024-11-06 13:43:05.676606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.767 [2024-11-06 13:43:05.676652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.767 [2024-11-06 13:43:05.676675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.767 [2024-11-06 13:43:05.676705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.767 [2024-11-06 13:43:05.676721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.767 [2024-11-06 13:43:05.676734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.767 [2024-11-06 13:43:05.676751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.768 [2024-11-06 13:43:05.676763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.768 [2024-11-06 13:43:05.676783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:11.768 [2024-11-06 13:43:05.676797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:11.768 [2024-11-06 13:43:05.676813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:11.768 [2024-11-06 13:43:05.676825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.026 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:12.026 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:12.026 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:12.026 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:12.026 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:12.026 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
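One detail of that pipeline worth unpacking: bdev_get_bdevs returns an array of bdev objects, and driver_specific.nvme is itself an array (a bdev can be reachable over several NVMe paths), hence the two [] hops in the jq filter. Against a trimmed, illustrative payload:

    printf '%s' '[
      {"name": "Nvme0n1", "driver_specific": {"nvme": [{"pci_address": "0000:00:10.0"}]}},
      {"name": "Nvme1n1", "driver_specific": {"nvme": [{"pci_address": "0000:00:11.0"}]}}
    ]' | jq -r '.[].driver_specific.nvme[].pci_address'
    # -> 0000:00:10.0
    #    0000:00:11.0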
00:17:12.026 13:43:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.026 13:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:12.026 13:43:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.026 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:12.026 13:43:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:12.285 13:43:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.22 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.22 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.22 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.22 2 00:17:24.493 remove_attach_helper took 45.22s to complete (handling 2 nvme drive(s)) 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:17:24.493 13:43:18 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:24.493 13:43:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:31.080 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:31.080 13:43:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.080 13:43:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:31.080 [2024-11-06 13:43:24.425312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
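Note: the bare `echo 1` commands traced at sw_hotplug.sh@39-@40 above look like no-ops only because bash xtrace omits redirections; each is a sysfs write that surprise-removes a controller, which is what provokes the `nvme_ctrlr_fail ... in failed state` errors and the qpair abort dump that follows. A minimal sketch of one hotplug cycle as the helper appears to perform it -- remove at @40, re-attach at @56-@62 -- where every sysfs path is an assumption based on the standard Linux PCI hotplug interface, since the actual redirection targets are not visible in the trace:

nvmes=(0000:00:10.0 0000:00:11.0)

# surprise-remove each controller (sw_hotplug.sh@40; target path assumed)
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done

# ...after the bdevs are gone, bring the devices back (@56-@62; paths assumed)
echo 1 > /sys/bus/pci/rescan
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers/nvme/unbind 2>/dev/null || true
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done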
00:17:31.080 [2024-11-06 13:43:24.428284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.080 [2024-11-06 13:43:24.428335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.080 [2024-11-06 13:43:24.428355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.080 [2024-11-06 13:43:24.428388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.080 [2024-11-06 13:43:24.428402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.080 [2024-11-06 13:43:24.428418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.080 [2024-11-06 13:43:24.428432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.080 [2024-11-06 13:43:24.428448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.081 [2024-11-06 13:43:24.428460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.081 [2024-11-06 13:43:24.428478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.081 [2024-11-06 13:43:24.428490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.081 [2024-11-06 13:43:24.428510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.081 13:43:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.081 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:31.081 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:31.081 [2024-11-06 13:43:24.925332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
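Note: between removal and re-attach the helper polls until bdev_get_bdevs stops reporting any PCI address, printing a "Still waiting for ... to be gone" line for each straggler (visible further down for 0000:00:11.0). The `(( 1 > 0 ))` / `sleep 0.5` / `(( 0 > 0 ))` checks traced at sw_hotplug.sh@50-@51 suggest a loop roughly like the following -- a reconstruction from the xtrace, not the verbatim script source:

bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    sleep 0.5
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    bdfs=($(bdev_bdfs))
done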
00:17:31.081 [2024-11-06 13:43:24.927806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.081 [2024-11-06 13:43:24.927851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.081 [2024-11-06 13:43:24.927875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.081 [2024-11-06 13:43:24.927904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.081 [2024-11-06 13:43:24.927920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.081 [2024-11-06 13:43:24.927933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.081 [2024-11-06 13:43:24.927951] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.081 [2024-11-06 13:43:24.927963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.081 [2024-11-06 13:43:24.927979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.081 [2024-11-06 13:43:24.927993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.081 [2024-11-06 13:43:24.928008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.081 [2024-11-06 13:43:24.928037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.081 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:31.081 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:31.081 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:31.081 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:31.081 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:31.081 13:43:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:31.081 13:43:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.081 13:43:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:31.081 13:43:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.081 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:31.081 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:31.339 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.339 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.339 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:31.339 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:31.339 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.339 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.339 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.339 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:31.598 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:31.598 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.598 13:43:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.811 13:43:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.811 13:43:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.811 13:43:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.811 [2024-11-06 13:43:37.425581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:43.811 [2024-11-06 13:43:37.432401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.811 [2024-11-06 13:43:37.432586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.811 [2024-11-06 13:43:37.432714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.811 [2024-11-06 13:43:37.432794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.811 [2024-11-06 13:43:37.432887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.811 [2024-11-06 13:43:37.432955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.811 [2024-11-06 13:43:37.433077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.811 [2024-11-06 13:43:37.433201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.811 [2024-11-06 13:43:37.433302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.811 [2024-11-06 13:43:37.433513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.811 [2024-11-06 13:43:37.433551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.811 [2024-11-06 13:43:37.433607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.811 13:43:37 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.811 13:43:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.811 13:43:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.811 13:43:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:43.811 13:43:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:44.069 [2024-11-06 13:43:37.925616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:44.069 [2024-11-06 13:43:37.928176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.069 [2024-11-06 13:43:37.928367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.069 [2024-11-06 13:43:37.928520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.069 [2024-11-06 13:43:37.928651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.069 [2024-11-06 13:43:37.928698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.069 [2024-11-06 13:43:37.928799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.069 [2024-11-06 13:43:37.928861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.069 [2024-11-06 13:43:37.928935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.069 [2024-11-06 13:43:37.928996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.069 [2024-11-06 13:43:37.929113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.069 [2024-11-06 13:43:37.929163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.069 [2024-11-06 13:43:37.929217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.069 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:44.069 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:44.069 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:44.069 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:44.069 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:44.069 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
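Note: the `bdev_bdfs` function expanded throughout this trace is a thin RPC-plus-jq pipeline -- it asks the running SPDK target for its bdevs and extracts the unique NVMe PCI addresses; the `/dev/fd/63` argument in the jq trace is the process substitution carrying `rpc_cmd`'s output. A simplified equivalent of what sw_hotplug.sh@12-@13 trace:

# list the PCI addresses (BDFs) behind the currently attached NVMe bdevs
bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}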
00:17:44.069 13:43:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.069 13:43:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:44.069 13:43:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:44.331 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:44.589 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:44.589 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:44.589 13:43:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:56.845 13:43:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.845 13:43:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:56.845 13:43:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:56.845 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:56.845 [2024-11-06 13:43:50.526493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:56.845 [2024-11-06 13:43:50.529083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.845 [2024-11-06 13:43:50.529233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.845 [2024-11-06 13:43:50.529395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.846 [2024-11-06 13:43:50.529472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.846 [2024-11-06 13:43:50.529560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.846 [2024-11-06 13:43:50.529624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.846 [2024-11-06 13:43:50.529722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.846 [2024-11-06 13:43:50.529771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.846 [2024-11-06 13:43:50.529942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.846 [2024-11-06 13:43:50.530004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.846 [2024-11-06 13:43:50.530054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.846 [2024-11-06 13:43:50.530183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.846 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:56.846 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:56.846 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:56.846 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:56.846 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:56.846 13:43:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.846 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:56.846 13:43:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:56.846 13:43:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.846 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:56.846 13:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:57.104 [2024-11-06 13:43:50.926499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:57.104 [2024-11-06 13:43:50.928961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.104 [2024-11-06 13:43:50.929150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.104 [2024-11-06 13:43:50.929316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.104 [2024-11-06 13:43:50.929443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.104 [2024-11-06 13:43:50.929487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.104 [2024-11-06 13:43:50.929582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.104 [2024-11-06 13:43:50.929644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.104 [2024-11-06 13:43:50.929718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.104 [2024-11-06 13:43:50.929829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.104 [2024-11-06 13:43:50.929961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.104 [2024-11-06 13:43:50.930060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.104 [2024-11-06 13:43:50.930120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:57.363 13:43:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:57.363 13:43:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:57.363 13:43:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:57.363 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:57.622 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.622 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.622 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.622 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:57.622 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:57.622 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.622 13:43:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.18 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.18 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:18:09.850 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:09.850 13:44:03 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69061 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 69061 ']' 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 69061 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69061 00:18:09.850 killing process with pid 69061 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69061' 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@971 -- # kill 69061 00:18:09.850 13:44:03 sw_hotplug -- common/autotest_common.sh@976 -- # wait 69061 00:18:12.390 13:44:06 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:12.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:13.215 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:13.215 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:13.215 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.215 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.473 00:18:13.473 real 2m33.430s 00:18:13.473 user 1m51.319s 00:18:13.473 sys 0m22.570s 00:18:13.473 13:44:07 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:18:13.473 13:44:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:13.473 ************************************ 00:18:13.473 END TEST sw_hotplug 00:18:13.473 ************************************ 00:18:13.473 13:44:07 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:18:13.473 13:44:07 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:13.473 13:44:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:13.473 13:44:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:13.473 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:18:13.473 ************************************ 00:18:13.473 START TEST nvme_xnvme 00:18:13.473 ************************************ 00:18:13.473 13:44:07 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:13.473 * Looking for test storage... 00:18:13.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:13.473 13:44:07 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:13.473 13:44:07 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:18:13.473 13:44:07 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:13.731 13:44:07 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.731 13:44:07 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:13.732 13:44:07 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.732 13:44:07 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.732 --rc genhtml_branch_coverage=1 00:18:13.732 --rc genhtml_function_coverage=1 00:18:13.732 --rc genhtml_legend=1 00:18:13.732 --rc geninfo_all_blocks=1 00:18:13.732 --rc geninfo_unexecuted_blocks=1 00:18:13.732 00:18:13.732 ' 00:18:13.732 13:44:07 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.732 --rc genhtml_branch_coverage=1 00:18:13.732 --rc genhtml_function_coverage=1 00:18:13.732 --rc genhtml_legend=1 00:18:13.732 --rc geninfo_all_blocks=1 00:18:13.732 --rc geninfo_unexecuted_blocks=1 00:18:13.732 00:18:13.732 ' 00:18:13.732 13:44:07 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.732 --rc genhtml_branch_coverage=1 00:18:13.732 --rc genhtml_function_coverage=1 00:18:13.732 --rc genhtml_legend=1 00:18:13.732 --rc geninfo_all_blocks=1 00:18:13.732 --rc geninfo_unexecuted_blocks=1 00:18:13.732 00:18:13.732 ' 00:18:13.732 13:44:07 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:13.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.732 --rc genhtml_branch_coverage=1 00:18:13.732 --rc genhtml_function_coverage=1 00:18:13.732 --rc genhtml_legend=1 00:18:13.732 --rc geninfo_all_blocks=1 00:18:13.732 --rc geninfo_unexecuted_blocks=1 00:18:13.732 00:18:13.732 ' 00:18:13.732 13:44:07 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.732 13:44:07 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.732 13:44:07 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.732 13:44:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.732 13:44:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.732 13:44:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:13.732 13:44:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.732 13:44:07 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:18:13.732 13:44:07 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:13.732 13:44:07 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:13.732 13:44:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:13.732 ************************************ 00:18:13.732 START TEST xnvme_to_malloc_dd_copy 00:18:13.732 ************************************ 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:18:13.732 13:44:07 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:13.732 13:44:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:13.732 { 00:18:13.732 "subsystems": [ 00:18:13.732 { 00:18:13.732 "subsystem": "bdev", 00:18:13.732 "config": [ 00:18:13.732 { 00:18:13.732 "params": { 00:18:13.732 "block_size": 512, 00:18:13.732 "num_blocks": 2097152, 00:18:13.732 "name": "malloc0" 00:18:13.732 }, 00:18:13.732 "method": "bdev_malloc_create" 00:18:13.732 }, 00:18:13.732 { 00:18:13.732 "params": { 00:18:13.732 "io_mechanism": "libaio", 00:18:13.732 "filename": "/dev/nullb0", 00:18:13.732 "name": "null0" 00:18:13.732 }, 00:18:13.732 "method": "bdev_xnvme_create" 00:18:13.732 }, 00:18:13.732 { 00:18:13.732 "method": "bdev_wait_for_examine" 00:18:13.732 } 00:18:13.732 ] 00:18:13.732 } 00:18:13.732 ] 00:18:13.732 } 00:18:13.732 [2024-11-06 13:44:07.676738] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
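Note: the copy step launched here is self-contained -- the JSON dumped just above creates a 1 GiB malloc bdev (2097152 blocks of 512 B) plus an xnvme bdev driving /dev/nullb0 via libaio, and spdk_dd then streams malloc0 into null0. A standalone reproduction under the same assumptions the test makes (null_blk loaded with `modprobe null_blk gb=1`), using a temporary file in place of the /dev/fd/62 process substitution:

modprobe null_blk gb=1

cat > /tmp/xnvme_dd.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json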
00:18:13.732 [2024-11-06 13:44:07.676982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70424 ] 00:18:13.991 [2024-11-06 13:44:07.848634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.991 [2024-11-06 13:44:07.962566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.522  [2024-11-06T13:44:11.880Z] Copying: 223/1024 [MB] (223 MBps) [2024-11-06T13:44:12.816Z] Copying: 467/1024 [MB] (244 MBps) [2024-11-06T13:44:13.750Z] Copying: 713/1024 [MB] (246 MBps) [2024-11-06T13:44:13.750Z] Copying: 960/1024 [MB] (246 MBps) [2024-11-06T13:44:19.020Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:18:25.037 00:18:25.037 13:44:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:18:25.037 13:44:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:18:25.037 13:44:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:25.037 13:44:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:25.037 { 00:18:25.037 "subsystems": [ 00:18:25.037 { 00:18:25.037 "subsystem": "bdev", 00:18:25.037 "config": [ 00:18:25.037 { 00:18:25.037 "params": { 00:18:25.037 "block_size": 512, 00:18:25.037 "num_blocks": 2097152, 00:18:25.037 "name": "malloc0" 00:18:25.037 }, 00:18:25.037 "method": "bdev_malloc_create" 00:18:25.037 }, 00:18:25.037 { 00:18:25.037 "params": { 00:18:25.037 "io_mechanism": "libaio", 00:18:25.037 "filename": "/dev/nullb0", 00:18:25.037 "name": "null0" 00:18:25.037 }, 00:18:25.037 "method": "bdev_xnvme_create" 00:18:25.037 }, 00:18:25.037 { 00:18:25.037 "method": "bdev_wait_for_examine" 00:18:25.037 } 00:18:25.037 ] 00:18:25.037 } 00:18:25.037 ] 00:18:25.037 } 00:18:25.037 [2024-11-06 13:44:18.326461] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:18:25.037 [2024-11-06 13:44:18.326875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70544 ] 00:18:25.037 [2024-11-06 13:44:18.514456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.037 [2024-11-06 13:44:18.655776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.566  [2024-11-06T13:44:22.485Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-06T13:44:23.419Z] Copying: 495/1024 [MB] (248 MBps) [2024-11-06T13:44:24.795Z] Copying: 745/1024 [MB] (249 MBps) [2024-11-06T13:44:24.795Z] Copying: 998/1024 [MB] (253 MBps) [2024-11-06T13:44:28.983Z] Copying: 1024/1024 [MB] (average 249 MBps) 00:18:35.000 00:18:35.000 13:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:18:35.000 13:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:18:35.000 13:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:18:35.000 13:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:18:35.000 13:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:35.000 13:44:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:35.258 { 00:18:35.258 "subsystems": [ 00:18:35.258 { 00:18:35.258 "subsystem": "bdev", 00:18:35.258 "config": [ 00:18:35.258 { 00:18:35.258 "params": { 00:18:35.258 "block_size": 512, 00:18:35.258 "num_blocks": 2097152, 00:18:35.258 "name": "malloc0" 00:18:35.258 }, 00:18:35.258 "method": "bdev_malloc_create" 00:18:35.258 }, 00:18:35.258 { 00:18:35.258 "params": { 00:18:35.258 "io_mechanism": "io_uring", 00:18:35.258 "filename": "/dev/nullb0", 00:18:35.258 "name": "null0" 00:18:35.258 }, 00:18:35.258 "method": "bdev_xnvme_create" 00:18:35.258 }, 00:18:35.258 { 00:18:35.258 "method": "bdev_wait_for_examine" 00:18:35.258 } 00:18:35.258 ] 00:18:35.258 } 00:18:35.258 ] 00:18:35.258 } 00:18:35.258 [2024-11-06 13:44:29.053568] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:18:35.258 [2024-11-06 13:44:29.054083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70665 ] 00:18:35.516 [2024-11-06 13:44:29.246626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.516 [2024-11-06 13:44:29.387482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.803  [2024-11-06T13:44:33.353Z] Copying: 259/1024 [MB] (259 MBps) [2024-11-06T13:44:34.289Z] Copying: 508/1024 [MB] (249 MBps) [2024-11-06T13:44:35.224Z] Copying: 756/1024 [MB] (248 MBps) [2024-11-06T13:44:35.224Z] Copying: 1004/1024 [MB] (247 MBps) [2024-11-06T13:44:40.492Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:18:46.509 00:18:46.509 13:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:18:46.509 13:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:18:46.509 13:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:18:46.509 13:44:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:46.509 { 00:18:46.509 "subsystems": [ 00:18:46.509 { 00:18:46.509 "subsystem": "bdev", 00:18:46.509 "config": [ 00:18:46.509 { 00:18:46.509 "params": { 00:18:46.509 "block_size": 512, 00:18:46.509 "num_blocks": 2097152, 00:18:46.509 "name": "malloc0" 00:18:46.509 }, 00:18:46.509 "method": "bdev_malloc_create" 00:18:46.509 }, 00:18:46.509 { 00:18:46.509 "params": { 00:18:46.509 "io_mechanism": "io_uring", 00:18:46.509 "filename": "/dev/nullb0", 00:18:46.509 "name": "null0" 00:18:46.509 }, 00:18:46.509 "method": "bdev_xnvme_create" 00:18:46.509 }, 00:18:46.509 { 00:18:46.509 "method": "bdev_wait_for_examine" 00:18:46.509 } 00:18:46.509 ] 00:18:46.509 } 00:18:46.509 ] 00:18:46.509 } 00:18:46.509 [2024-11-06 13:44:39.782092] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:18:46.509 [2024-11-06 13:44:39.782257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70786 ] 00:18:46.509 [2024-11-06 13:44:39.956838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.509 [2024-11-06 13:44:40.101870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.035  [2024-11-06T13:44:43.953Z] Copying: 267/1024 [MB] (267 MBps) [2024-11-06T13:44:44.888Z] Copying: 534/1024 [MB] (267 MBps) [2024-11-06T13:44:45.824Z] Copying: 799/1024 [MB] (265 MBps) [2024-11-06T13:44:51.089Z] Copying: 1024/1024 [MB] (average 266 MBps) 00:18:57.106 00:18:57.106 13:44:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:18:57.106 13:44:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:18:57.106 ************************************ 00:18:57.106 END TEST xnvme_to_malloc_dd_copy 00:18:57.106 ************************************ 00:18:57.106 00:18:57.106 real 0m42.630s 00:18:57.106 user 0m36.702s 00:18:57.106 sys 0m5.398s 00:18:57.106 13:44:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:57.106 13:44:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:18:57.106 13:44:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:57.106 13:44:50 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:57.106 13:44:50 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:57.106 13:44:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.106 ************************************ 00:18:57.106 START TEST xnvme_bdevperf 00:18:57.106 ************************************ 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:18:57.106 
13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:57.106 13:44:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:57.106 { 00:18:57.106 "subsystems": [ 00:18:57.106 { 00:18:57.106 "subsystem": "bdev", 00:18:57.106 "config": [ 00:18:57.106 { 00:18:57.106 "params": { 00:18:57.106 "io_mechanism": "libaio", 00:18:57.106 "filename": "/dev/nullb0", 00:18:57.106 "name": "null0" 00:18:57.106 }, 00:18:57.106 "method": "bdev_xnvme_create" 00:18:57.106 }, 00:18:57.106 { 00:18:57.106 "method": "bdev_wait_for_examine" 00:18:57.106 } 00:18:57.106 ] 00:18:57.106 } 00:18:57.106 ] 00:18:57.106 } 00:18:57.106 [2024-11-06 13:44:50.383668] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:18:57.106 [2024-11-06 13:44:50.383832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70931 ] 00:18:57.106 [2024-11-06 13:44:50.578273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.106 [2024-11-06 13:44:50.725092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.364 Running I/O for 5 seconds... 00:18:59.232 147136.00 IOPS, 574.75 MiB/s [2024-11-06T13:44:54.592Z] 143232.00 IOPS, 559.50 MiB/s [2024-11-06T13:44:55.528Z] 144682.67 IOPS, 565.17 MiB/s [2024-11-06T13:44:56.465Z] 145440.00 IOPS, 568.12 MiB/s 00:19:02.482 Latency(us) 00:19:02.482 [2024-11-06T13:44:56.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.482 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:02.482 null0 : 5.00 145908.02 569.95 0.00 0.00 436.00 132.63 1958.28 00:19:02.482 [2024-11-06T13:44:56.465Z] =================================================================================================================== 00:19:02.482 [2024-11-06T13:44:56.465Z] Total : 145908.02 569.95 0.00 0.00 436.00 132.63 1958.28 00:19:03.858 13:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:19:03.858 13:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:19:03.858 13:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:19:03.858 13:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:19:03.858 13:44:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:03.858 13:44:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:03.858 { 00:19:03.858 "subsystems": [ 00:19:03.858 { 00:19:03.858 "subsystem": "bdev", 00:19:03.858 "config": [ 00:19:03.858 { 00:19:03.858 "params": { 00:19:03.858 "io_mechanism": "io_uring", 00:19:03.858 "filename": "/dev/nullb0", 00:19:03.859 "name": "null0" 00:19:03.859 }, 00:19:03.859 "method": "bdev_xnvme_create" 00:19:03.859 }, 00:19:03.859 { 00:19:03.859 "method": 
"bdev_wait_for_examine" 00:19:03.859 } 00:19:03.859 ] 00:19:03.859 } 00:19:03.859 ] 00:19:03.859 } 00:19:03.859 [2024-11-06 13:44:57.593192] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:19:03.859 [2024-11-06 13:44:57.593370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71011 ] 00:19:03.859 [2024-11-06 13:44:57.779932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.117 [2024-11-06 13:44:57.925187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.684 Running I/O for 5 seconds... 00:19:06.554 187904.00 IOPS, 734.00 MiB/s [2024-11-06T13:45:01.472Z] 191840.00 IOPS, 749.38 MiB/s [2024-11-06T13:45:02.406Z] 192725.33 IOPS, 752.83 MiB/s [2024-11-06T13:45:03.781Z] 192208.00 IOPS, 750.81 MiB/s 00:19:09.798 Latency(us) 00:19:09.798 [2024-11-06T13:45:03.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.798 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:09.798 null0 : 5.00 191757.62 749.05 0.00 0.00 331.31 197.00 1864.66 00:19:09.798 [2024-11-06T13:45:03.781Z] =================================================================================================================== 00:19:09.798 [2024-11-06T13:45:03.781Z] Total : 191757.62 749.05 0.00 0.00 331.31 197.00 1864.66 00:19:10.734 13:45:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:19:10.734 13:45:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:19:10.734 ************************************ 00:19:10.734 END TEST xnvme_bdevperf 00:19:10.734 ************************************ 00:19:10.734 00:19:10.734 real 0m14.440s 00:19:10.734 user 0m10.634s 00:19:10.734 sys 0m3.569s 00:19:10.734 13:45:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:10.734 13:45:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:10.992 ************************************ 00:19:10.992 END TEST nvme_xnvme 00:19:10.992 ************************************ 00:19:10.992 00:19:10.992 real 0m57.402s 00:19:10.992 user 0m47.509s 00:19:10.992 sys 0m9.134s 00:19:10.992 13:45:04 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:10.992 13:45:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.992 13:45:04 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:10.992 13:45:04 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:10.992 13:45:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:10.992 13:45:04 -- common/autotest_common.sh@10 -- # set +x 00:19:10.992 ************************************ 00:19:10.992 START TEST blockdev_xnvme 00:19:10.992 ************************************ 00:19:10.992 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:10.992 * Looking for test storage... 
00:19:10.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:10.992 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:10.992 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:19:10.992 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:11.251 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.251 13:45:04 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:11.251 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.251 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:11.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.251 --rc genhtml_branch_coverage=1 00:19:11.251 --rc genhtml_function_coverage=1 00:19:11.251 --rc genhtml_legend=1 00:19:11.251 --rc geninfo_all_blocks=1 00:19:11.251 --rc geninfo_unexecuted_blocks=1 00:19:11.251 00:19:11.251 ' 00:19:11.251 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:11.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.251 --rc genhtml_branch_coverage=1 00:19:11.251 --rc genhtml_function_coverage=1 00:19:11.251 --rc genhtml_legend=1 
00:19:11.251 --rc geninfo_all_blocks=1 00:19:11.251 --rc geninfo_unexecuted_blocks=1 00:19:11.251 00:19:11.251 ' 00:19:11.251 13:45:04 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:11.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.251 --rc genhtml_branch_coverage=1 00:19:11.251 --rc genhtml_function_coverage=1 00:19:11.251 --rc genhtml_legend=1 00:19:11.251 --rc geninfo_all_blocks=1 00:19:11.251 --rc geninfo_unexecuted_blocks=1 00:19:11.251 00:19:11.251 ' 00:19:11.251 13:45:05 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:11.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.251 --rc genhtml_branch_coverage=1 00:19:11.251 --rc genhtml_function_coverage=1 00:19:11.251 --rc genhtml_legend=1 00:19:11.251 --rc geninfo_all_blocks=1 00:19:11.251 --rc geninfo_unexecuted_blocks=1 00:19:11.251 00:19:11.251 ' 00:19:11.251 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71164 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71164 00:19:11.252 13:45:05 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 71164 ']' 00:19:11.252 13:45:05 blockdev_xnvme -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:19:11.252 13:45:05 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:11.252 13:45:05 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:11.252 13:45:05 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.252 13:45:05 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:11.252 13:45:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.252 [2024-11-06 13:45:05.119727] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:19:11.252 [2024-11-06 13:45:05.120057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:19:11.511 [2024-11-06 13:45:05.305609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.511 [2024-11-06 13:45:05.470571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.445 13:45:06 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:12.445 13:45:06 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:19:12.445 13:45:06 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:12.445 13:45:06 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:12.445 13:45:06 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:12.445 13:45:06 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:12.445 13:45:06 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:13.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:13.270 Waiting for block devices as requested 00:19:13.271 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.271 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.530 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.530 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:18.811 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:18.811 13:45:12 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:18.811 13:45:12 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:19:18.811 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:19:18.812 nvme0n1 00:19:18.812 nvme1n1 00:19:18.812 nvme2n1 00:19:18.812 nvme2n2 00:19:18.812 nvme2n3 00:19:18.812 nvme3n1 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.812 13:45:12 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:18.812 13:45:12 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:18.812 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "df38f8ae-3973-415a-bb97-34d164a84d19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "df38f8ae-3973-415a-bb97-34d164a84d19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "eebf42c9-10a2-445f-8011-30c8aed21738"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "eebf42c9-10a2-445f-8011-30c8aed21738",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "934c363a-f703-464e-80ab-3ce0771b9274"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "934c363a-f703-464e-80ab-3ce0771b9274",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f1b6a21c-7389-4375-903a-8b7ce67c6d50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f1b6a21c-7389-4375-903a-8b7ce67c6d50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8d73fdc5-67f1-4a9e-a751-4f06a368a357"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d73fdc5-67f1-4a9e-a751-4f06a368a357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "51ad4151-dd29-468b-a8eb-02511994467c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "51ad4151-dd29-468b-a8eb-02511994467c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:19.070 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:19.070 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:19.070 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:19.070 13:45:12 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71164 00:19:19.070 13:45:12 
blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 71164 ']' 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 71164 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71164 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:19.070 killing process with pid 71164 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71164' 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 71164 00:19:19.070 13:45:12 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 71164 00:19:22.353 13:45:15 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:22.353 13:45:15 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:22.353 13:45:15 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:22.353 13:45:15 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:22.353 13:45:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.353 ************************************ 00:19:22.353 START TEST bdev_hello_world 00:19:22.353 ************************************ 00:19:22.353 13:45:15 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:22.353 [2024-11-06 13:45:15.713359] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:19:22.353 [2024-11-06 13:45:15.713780] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71540 ] 00:19:22.353 [2024-11-06 13:45:15.897300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.353 [2024-11-06 13:45:16.042222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.612 [2024-11-06 13:45:16.555682] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:22.612 [2024-11-06 13:45:16.555756] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:22.612 [2024-11-06 13:45:16.555779] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:22.612 [2024-11-06 13:45:16.558378] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:22.612 [2024-11-06 13:45:16.558932] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:22.612 [2024-11-06 13:45:16.558965] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:22.612 [2024-11-06 13:45:16.559199] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
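
The NOTICE lines above are hello_bdev's full round trip against the xnvme bdev nvme0n1: open the bdev and an I/O channel, write "Hello World!", read it back, then stop the app. A sketch of the standalone invocation, assuming the same generated bdev.json of bdev_xnvme_create entries used throughout this test:

cd /home/vagrant/spdk_repo/spdk
# -b selects which bdev from the JSON config the example targets.
./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
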
00:19:22.612 00:19:22.612 [2024-11-06 13:45:16.559223] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:23.987 00:19:23.987 ************************************ 00:19:23.987 END TEST bdev_hello_world 00:19:23.987 ************************************ 00:19:23.987 real 0m2.229s 00:19:23.987 user 0m1.770s 00:19:23.987 sys 0m0.338s 00:19:23.987 13:45:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:23.987 13:45:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:23.987 13:45:17 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:23.987 13:45:17 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:23.987 13:45:17 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:23.987 13:45:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:23.987 ************************************ 00:19:23.987 START TEST bdev_bounds 00:19:23.987 ************************************ 00:19:23.987 13:45:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:19:23.987 Process bdevio pid: 71582 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71582 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71582' 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71582 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 71582 ']' 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.988 13:45:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:24.252 [2024-11-06 13:45:18.009186] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
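
bdev_bounds splits the work across two processes: bdevio is started in wait mode (-w; the -s 0 mirrors the PRE_RESERVED_MEM=0 set earlier) against the same bdev.json, and tests.py then triggers the CUnit suites that follow over the RPC socket. A rough sketch, with paths as in the trace above:

cd /home/vagrant/spdk_repo/spdk
# Start the bdevio app waiting for tests to be kicked off via RPC...
./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
# ...then run the full suite against every unclaimed bdev.
./test/bdev/bdevio/tests.py perform_tests
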
00:19:24.252 [2024-11-06 13:45:18.009596] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71582 ] 00:19:24.252 [2024-11-06 13:45:18.205241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.514 [2024-11-06 13:45:18.358457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.514 [2024-11-06 13:45:18.358538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.514 [2024-11-06 13:45:18.358523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.082 13:45:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:25.082 13:45:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:19:25.082 13:45:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:25.342 I/O targets: 00:19:25.342 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:25.342 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:25.342 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:25.342 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:25.342 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:25.342 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:25.342 00:19:25.342 00:19:25.342 CUnit - A unit testing framework for C - Version 2.1-3 00:19:25.342 http://cunit.sourceforge.net/ 00:19:25.342 00:19:25.342 00:19:25.342 Suite: bdevio tests on: nvme3n1 00:19:25.342 Test: blockdev write read block ...passed 00:19:25.342 Test: blockdev write zeroes read block ...passed 00:19:25.342 Test: blockdev write zeroes read no split ...passed 00:19:25.342 Test: blockdev write zeroes read split ...passed 00:19:25.342 Test: blockdev write zeroes read split partial ...passed 00:19:25.342 Test: blockdev reset ...passed 00:19:25.342 Test: blockdev write read 8 blocks ...passed 00:19:25.342 Test: blockdev write read size > 128k ...passed 00:19:25.342 Test: blockdev write read invalid size ...passed 00:19:25.342 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.342 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.342 Test: blockdev write read max offset ...passed 00:19:25.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.342 Test: blockdev writev readv 8 blocks ...passed 00:19:25.342 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.342 Test: blockdev writev readv block ...passed 00:19:25.342 Test: blockdev writev readv size > 128k ...passed 00:19:25.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.342 Test: blockdev comparev and writev ...passed 00:19:25.342 Test: blockdev nvme passthru rw ...passed 00:19:25.342 Test: blockdev nvme passthru vendor specific ...passed 00:19:25.342 Test: blockdev nvme admin passthru ...passed 00:19:25.342 Test: blockdev copy ...passed 00:19:25.342 Suite: bdevio tests on: nvme2n3 00:19:25.342 Test: blockdev write read block ...passed 00:19:25.342 Test: blockdev write zeroes read block ...passed 00:19:25.342 Test: blockdev write zeroes read no split ...passed 00:19:25.342 Test: blockdev write zeroes read split ...passed 00:19:25.342 Test: blockdev write zeroes read split partial ...passed 00:19:25.342 Test: blockdev reset ...passed 
00:19:25.342 Test: blockdev write read 8 blocks ...passed 00:19:25.342 Test: blockdev write read size > 128k ...passed 00:19:25.342 Test: blockdev write read invalid size ...passed 00:19:25.342 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.342 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.342 Test: blockdev write read max offset ...passed 00:19:25.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.342 Test: blockdev writev readv 8 blocks ...passed 00:19:25.342 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.342 Test: blockdev writev readv block ...passed 00:19:25.342 Test: blockdev writev readv size > 128k ...passed 00:19:25.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.342 Test: blockdev comparev and writev ...passed 00:19:25.342 Test: blockdev nvme passthru rw ...passed 00:19:25.342 Test: blockdev nvme passthru vendor specific ...passed 00:19:25.342 Test: blockdev nvme admin passthru ...passed 00:19:25.342 Test: blockdev copy ...passed 00:19:25.342 Suite: bdevio tests on: nvme2n2 00:19:25.342 Test: blockdev write read block ...passed 00:19:25.342 Test: blockdev write zeroes read block ...passed 00:19:25.342 Test: blockdev write zeroes read no split ...passed 00:19:25.342 Test: blockdev write zeroes read split ...passed 00:19:25.602 Test: blockdev write zeroes read split partial ...passed 00:19:25.602 Test: blockdev reset ...passed 00:19:25.602 Test: blockdev write read 8 blocks ...passed 00:19:25.602 Test: blockdev write read size > 128k ...passed 00:19:25.602 Test: blockdev write read invalid size ...passed 00:19:25.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.602 Test: blockdev write read max offset ...passed 00:19:25.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.602 Test: blockdev writev readv 8 blocks ...passed 00:19:25.602 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.602 Test: blockdev writev readv block ...passed 00:19:25.602 Test: blockdev writev readv size > 128k ...passed 00:19:25.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.602 Test: blockdev comparev and writev ...passed 00:19:25.602 Test: blockdev nvme passthru rw ...passed 00:19:25.602 Test: blockdev nvme passthru vendor specific ...passed 00:19:25.602 Test: blockdev nvme admin passthru ...passed 00:19:25.602 Test: blockdev copy ...passed 00:19:25.602 Suite: bdevio tests on: nvme2n1 00:19:25.602 Test: blockdev write read block ...passed 00:19:25.602 Test: blockdev write zeroes read block ...passed 00:19:25.602 Test: blockdev write zeroes read no split ...passed 00:19:25.602 Test: blockdev write zeroes read split ...passed 00:19:25.602 Test: blockdev write zeroes read split partial ...passed 00:19:25.602 Test: blockdev reset ...passed 00:19:25.602 Test: blockdev write read 8 blocks ...passed 00:19:25.602 Test: blockdev write read size > 128k ...passed 00:19:25.602 Test: blockdev write read invalid size ...passed 00:19:25.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.602 Test: blockdev write read max offset ...passed 00:19:25.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.602 Test: blockdev writev readv 8 blocks 
...passed 00:19:25.602 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.602 Test: blockdev writev readv block ...passed 00:19:25.602 Test: blockdev writev readv size > 128k ...passed 00:19:25.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.602 Test: blockdev comparev and writev ...passed 00:19:25.602 Test: blockdev nvme passthru rw ...passed 00:19:25.602 Test: blockdev nvme passthru vendor specific ...passed 00:19:25.602 Test: blockdev nvme admin passthru ...passed 00:19:25.602 Test: blockdev copy ...passed 00:19:25.602 Suite: bdevio tests on: nvme1n1 00:19:25.602 Test: blockdev write read block ...passed 00:19:25.602 Test: blockdev write zeroes read block ...passed 00:19:25.602 Test: blockdev write zeroes read no split ...passed 00:19:25.602 Test: blockdev write zeroes read split ...passed 00:19:25.861 Test: blockdev write zeroes read split partial ...passed 00:19:25.861 Test: blockdev reset ...passed 00:19:25.861 Test: blockdev write read 8 blocks ...passed 00:19:25.861 Test: blockdev write read size > 128k ...passed 00:19:25.861 Test: blockdev write read invalid size ...passed 00:19:25.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.861 Test: blockdev write read max offset ...passed 00:19:25.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.861 Test: blockdev writev readv 8 blocks ...passed 00:19:25.861 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.861 Test: blockdev writev readv block ...passed 00:19:25.861 Test: blockdev writev readv size > 128k ...passed 00:19:25.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.861 Test: blockdev comparev and writev ...passed 00:19:25.861 Test: blockdev nvme passthru rw ...passed 00:19:25.861 Test: blockdev nvme passthru vendor specific ...passed 00:19:25.861 Test: blockdev nvme admin passthru ...passed 00:19:25.861 Test: blockdev copy ...passed 00:19:25.861 Suite: bdevio tests on: nvme0n1 00:19:25.861 Test: blockdev write read block ...passed 00:19:25.861 Test: blockdev write zeroes read block ...passed 00:19:25.861 Test: blockdev write zeroes read no split ...passed 00:19:25.861 Test: blockdev write zeroes read split ...passed 00:19:25.861 Test: blockdev write zeroes read split partial ...passed 00:19:25.861 Test: blockdev reset ...passed 00:19:25.861 Test: blockdev write read 8 blocks ...passed 00:19:25.861 Test: blockdev write read size > 128k ...passed 00:19:25.861 Test: blockdev write read invalid size ...passed 00:19:25.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.861 Test: blockdev write read max offset ...passed 00:19:25.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.861 Test: blockdev writev readv 8 blocks ...passed 00:19:25.861 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.861 Test: blockdev writev readv block ...passed 00:19:25.861 Test: blockdev writev readv size > 128k ...passed 00:19:25.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.861 Test: blockdev comparev and writev ...passed 00:19:25.861 Test: blockdev nvme passthru rw ...passed 00:19:25.861 Test: blockdev nvme passthru vendor specific ...passed 00:19:25.861 Test: blockdev nvme admin passthru ...passed 00:19:25.861 Test: blockdev copy ...passed 
00:19:25.861 00:19:25.861 Run Summary: Type Total Ran Passed Failed Inactive 00:19:25.861 suites 6 6 n/a 0 0 00:19:25.861 tests 138 138 138 0 0 00:19:25.861 asserts 780 780 780 0 n/a 00:19:25.861 00:19:25.861 Elapsed time = 1.714 seconds 00:19:25.861 0 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71582 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 71582 ']' 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 71582 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71582 00:19:25.861 killing process with pid 71582 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71582' 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 71582 00:19:25.861 13:45:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 71582 00:19:27.238 13:45:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:27.238 00:19:27.238 real 0m3.189s 00:19:27.238 user 0m7.910s 00:19:27.238 sys 0m0.560s 00:19:27.238 13:45:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:27.238 ************************************ 00:19:27.238 END TEST bdev_bounds 00:19:27.238 ************************************ 00:19:27.238 13:45:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:27.238 13:45:21 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:19:27.238 13:45:21 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:27.238 13:45:21 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:27.238 13:45:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:27.238 ************************************ 00:19:27.238 START TEST bdev_nbd 00:19:27.238 ************************************ 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
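
The nbd test below exports each of the six xnvme bdevs as a kernel /dev/nbdX node through a bdev_svc app on a dedicated RPC socket, then smoke-reads each node with dd before tearing it down. A sketch of one such round trip, assuming the nbd kernel module is loaded:

cd /home/vagrant/spdk_repo/spdk
# bdev_svc hosts the bdevs; -r points it at the nbd-specific RPC socket.
./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
nbd=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1)  # prints e.g. /dev/nbd0
dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct  # same probe the waitfornbd helper runs below
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
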
00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71661 00:19:27.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71661 /var/tmp/spdk-nbd.sock 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 71661 ']' 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:27.238 13:45:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:27.497 [2024-11-06 13:45:21.263241] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:19:27.497 [2024-11-06 13:45:21.264145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.497 [2024-11-06 13:45:21.456751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.756 [2024-11-06 13:45:21.600344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:28.324 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.583 
1+0 records in 00:19:28.583 1+0 records out 00:19:28.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563058 s, 7.3 MB/s 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:28.583 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.150 1+0 records in 00:19:29.150 1+0 records out 00:19:29.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643319 s, 6.4 MB/s 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.150 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:29.151 13:45:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:29.151 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:29.151 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:29.151 13:45:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:29.151 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:29.151 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:29.409 13:45:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.409 1+0 records in 00:19:29.409 1+0 records out 00:19:29.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594308 s, 6.9 MB/s 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:29.409 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.668 1+0 records in 00:19:29.668 1+0 records out 00:19:29.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773705 s, 5.3 MB/s 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:29.668 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:29.927 1+0 records in 00:19:29.927 1+0 records out 00:19:29.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00089201 s, 4.6 MB/s 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:29.927 13:45:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:19:30.186 13:45:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.186 1+0 records in 00:19:30.186 1+0 records out 00:19:30.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710256 s, 5.8 MB/s 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:30.186 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:30.753 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd0", 00:19:30.754 "bdev_name": "nvme0n1" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd1", 00:19:30.754 "bdev_name": "nvme1n1" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd2", 00:19:30.754 "bdev_name": "nvme2n1" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd3", 00:19:30.754 "bdev_name": "nvme2n2" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd4", 00:19:30.754 "bdev_name": "nvme2n3" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd5", 00:19:30.754 "bdev_name": "nvme3n1" 00:19:30.754 } 00:19:30.754 ]' 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd0", 00:19:30.754 "bdev_name": "nvme0n1" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd1", 00:19:30.754 "bdev_name": "nvme1n1" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd2", 00:19:30.754 "bdev_name": "nvme2n1" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd3", 00:19:30.754 "bdev_name": "nvme2n2" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd4", 00:19:30.754 "bdev_name": "nvme2n3" 00:19:30.754 }, 00:19:30.754 { 00:19:30.754 "nbd_device": "/dev/nbd5", 00:19:30.754 "bdev_name": "nvme3n1" 00:19:30.754 } 00:19:30.754 ]' 00:19:30.754 13:45:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.754 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.013 13:45:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.272 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.530 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.788 13:45:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.356 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.616 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:32.878 13:45:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:33.137 /dev/nbd0 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.137 1+0 records in 00:19:33.137 1+0 records out 00:19:33.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618041 s, 6.6 MB/s 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:33.137 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:19:33.705 /dev/nbd1 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.705 1+0 records in 00:19:33.705 1+0 records out 00:19:33.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803964 s, 5.1 MB/s 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:33.705 13:45:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:33.705 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:19:33.965 /dev/nbd10 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.965 1+0 records in 00:19:33.965 1+0 records out 00:19:33.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600393 s, 6.8 MB/s 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:33.965 13:45:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:19:34.224 /dev/nbd11 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:34.224 13:45:28 
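Two flavors of nbd_start_disk appear in this log: the earlier pass gave only a bdev name and let the RPC pick the node (the trace captured its stdout, e.g. nbd_device=/dev/nbd3), while the nbd_rpc_data_verify pass running here pins each bdev to an explicit node. A sketch of the explicit-mapping loop, using the parallel arrays declared at nbd_common.sh@10-11:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # Export bdev_list[i] on the requested node instead of an auto-assigned one.
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"   # readiness probe sketched earlier
    done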
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.224 1+0 records in 00:19:34.224 1+0 records out 00:19:34.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667434 s, 6.1 MB/s 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:34.224 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:19:34.482 /dev/nbd12 00:19:34.482 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:34.482 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:34.482 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:19:34.482 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:34.482 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:34.482 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:34.482 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.741 1+0 records in 00:19:34.741 1+0 records out 00:19:34.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716098 s, 5.7 MB/s 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:34.741 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:35.000 /dev/nbd13 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:35.000 1+0 records in 00:19:35.000 1+0 records out 00:19:35.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604723 s, 6.8 MB/s 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:35.000 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:35.001 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:35.001 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.001 13:45:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:35.259 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:35.259 { 00:19:35.259 "nbd_device": "/dev/nbd0", 00:19:35.259 "bdev_name": "nvme0n1" 00:19:35.259 }, 00:19:35.259 { 00:19:35.259 "nbd_device": "/dev/nbd1", 00:19:35.259 "bdev_name": "nvme1n1" 00:19:35.259 }, 00:19:35.259 { 00:19:35.259 "nbd_device": "/dev/nbd10", 00:19:35.259 "bdev_name": "nvme2n1" 00:19:35.259 }, 00:19:35.259 { 00:19:35.259 "nbd_device": "/dev/nbd11", 00:19:35.259 "bdev_name": "nvme2n2" 00:19:35.259 }, 00:19:35.259 { 00:19:35.259 "nbd_device": "/dev/nbd12", 00:19:35.259 "bdev_name": "nvme2n3" 00:19:35.259 }, 00:19:35.260 { 00:19:35.260 "nbd_device": "/dev/nbd13", 00:19:35.260 "bdev_name": "nvme3n1" 00:19:35.260 } 00:19:35.260 ]' 00:19:35.260 13:45:29 
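The JSON above lists all six exported devices, and the very next step in the trace counts them and asserts the total. One detail worth noting: grep -c exits nonzero when the count is zero, which is why the empty-list case elsewhere in this log shows a bare "true" at nbd_common.sh@65 — read here as an || true guard, though that exact spelling is an inference. A sketch of the assertion:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    # Six bdevs were exported, so any other count fails the test.
    if [ "$count" -ne 6 ]; then
        exit 1
    fi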
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:35.260 { 00:19:35.260 "nbd_device": "/dev/nbd0", 00:19:35.260 "bdev_name": "nvme0n1" 00:19:35.260 }, 00:19:35.260 { 00:19:35.260 "nbd_device": "/dev/nbd1", 00:19:35.260 "bdev_name": "nvme1n1" 00:19:35.260 }, 00:19:35.260 { 00:19:35.260 "nbd_device": "/dev/nbd10", 00:19:35.260 "bdev_name": "nvme2n1" 00:19:35.260 }, 00:19:35.260 { 00:19:35.260 "nbd_device": "/dev/nbd11", 00:19:35.260 "bdev_name": "nvme2n2" 00:19:35.260 }, 00:19:35.260 { 00:19:35.260 "nbd_device": "/dev/nbd12", 00:19:35.260 "bdev_name": "nvme2n3" 00:19:35.260 }, 00:19:35.260 { 00:19:35.260 "nbd_device": "/dev/nbd13", 00:19:35.260 "bdev_name": "nvme3n1" 00:19:35.260 } 00:19:35.260 ]' 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:35.260 /dev/nbd1 00:19:35.260 /dev/nbd10 00:19:35.260 /dev/nbd11 00:19:35.260 /dev/nbd12 00:19:35.260 /dev/nbd13' 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:35.260 /dev/nbd1 00:19:35.260 /dev/nbd10 00:19:35.260 /dev/nbd11 00:19:35.260 /dev/nbd12 00:19:35.260 /dev/nbd13' 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:35.260 256+0 records in 00:19:35.260 256+0 records out 00:19:35.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112255 s, 93.4 MB/s 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.260 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:35.519 256+0 records in 00:19:35.519 256+0 records out 00:19:35.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121793 s, 8.6 MB/s 00:19:35.519 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.519 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:35.519 256+0 records in 00:19:35.519 256+0 records out 00:19:35.519 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.151134 s, 6.9 MB/s 00:19:35.519 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.519 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:35.856 256+0 records in 00:19:35.856 256+0 records out 00:19:35.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129057 s, 8.1 MB/s 00:19:35.856 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.856 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:35.856 256+0 records in 00:19:35.856 256+0 records out 00:19:35.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129194 s, 8.1 MB/s 00:19:35.856 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:35.856 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:36.113 256+0 records in 00:19:36.113 256+0 records out 00:19:36.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130379 s, 8.0 MB/s 00:19:36.113 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:36.113 13:45:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:36.113 256+0 records in 00:19:36.113 256+0 records out 00:19:36.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131455 s, 8.0 MB/s 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.113 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.680 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.939 13:45:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:37.197 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:37.456 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.023 13:45:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.282 13:45:32 
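Between the count check and this teardown, the trace ran nbd_dd_data_verify twice: a write pass that stamps the same 1 MiB of urandom onto every device with direct I/O, and a verify pass that byte-compares each device against the pattern file. Condensed to a sketch, with paths and flags as in the trace:

    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    # Write pass: one shared 1 MiB random pattern, pushed to every device.
    dd if=/dev/urandom of="$pattern" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
    done
    # Verify pass: -b reports differing bytes, -n 1M bounds the comparison.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$pattern" "$dev"
    done
    rm "$pattern"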
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:38.282 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:38.541 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:38.800 malloc_lvol_verify 00:19:38.801 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:39.059 d4c9387b-6ff3-4502-b13b-cc1de3903dec 00:19:39.059 13:45:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:39.625 ac810bdd-0d01-4f38-ab84-3e3de2056d6a 00:19:39.625 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:39.884 /dev/nbd0 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:19:39.884 mke2fs 1.47.0 (5-Feb-2023) 00:19:39.884 Discarding device blocks: 0/4096 done 00:19:39.884 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:39.884 00:19:39.884 Allocating group tables: 0/1 done 00:19:39.884 Writing inode tables: 0/1 done 00:19:39.884 Creating journal (1024 blocks): done 00:19:39.884 Writing superblocks and filesystem accounting information: 0/1 done 00:19:39.884 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:39.884 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:40.143 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:40.143 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.143 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:40.143 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.143 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.143 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.143 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:40.144 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.144 13:45:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71661 00:19:40.144 13:45:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 71661 ']' 00:19:40.144 13:45:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 71661 00:19:40.144 13:45:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:19:40.144 13:45:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.144 13:45:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71661 00:19:40.144 13:45:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:40.144 13:45:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:40.144 killing process with pid 71661 00:19:40.144 13:45:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71661' 00:19:40.144 13:45:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 71661 00:19:40.144 13:45:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 71661 00:19:42.045 13:45:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:42.045 00:19:42.045 real 0m14.493s 00:19:42.045 user 0m19.276s 00:19:42.045 sys 0m6.053s 00:19:42.045 13:45:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:42.045 ************************************ 00:19:42.045 END TEST bdev_nbd 00:19:42.045 ************************************ 00:19:42.045 
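The last bdev_nbd step before the banner above stacked a logical volume on a malloc bdev, exported it over NBD, and proved the kernel could format it. Every RPC below appears verbatim in the trace; only the sequencing is condensed into a sketch:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # size 16, block size 512, as passed in the trace
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside it (mke2fs saw 4096 1k blocks)
    rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol over NBD
    mkfs.ext4 /dev/nbd0                                   # kernel-side smoke test
    rpc nbd_stop_disk /dev/nbd0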
13:45:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:42.045 13:45:35 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:42.045 13:45:35 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:19:42.045 13:45:35 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:19:42.045 13:45:35 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:42.045 13:45:35 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:42.045 13:45:35 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:42.045 13:45:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.045 ************************************ 00:19:42.045 START TEST bdev_fio 00:19:42.045 ************************************ 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:42.045 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:19:42.045 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo 
serialize_overlap=1 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:42.046 ************************************ 00:19:42.046 START TEST bdev_fio_rw_verify 00:19:42.046 ************************************ 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:42.046 13:45:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:42.305 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:42.305 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:42.305 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:42.305 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:42.305 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:42.305 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:42.305 fio-3.35 00:19:42.305 Starting 6 threads 00:19:54.511 00:19:54.511 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72106: Wed Nov 6 13:45:47 2024 00:19:54.511 read: IOPS=29.7k, BW=116MiB/s (122MB/s)(1162MiB/10001msec) 00:19:54.511 slat (usec): min=2, max=2379, avg= 8.30, stdev= 7.45 00:19:54.511 clat (usec): min=117, max=1347.5k, avg=608.76, 
stdev=6986.67 00:19:54.511 lat (usec): min=119, max=1347.6k, avg=617.05, stdev=6986.72 00:19:54.511 clat percentiles (usec): 00:19:54.511 | 50.000th=[ 553], 99.000th=[ 1254], 99.900th=[ 1876], 00:19:54.511 | 99.990th=[ 3884], 99.999th=[1350566] 00:19:54.511 write: IOPS=30.0k, BW=117MiB/s (123MB/s)(1174MiB/10001msec); 0 zone resets 00:19:54.511 slat (usec): min=11, max=3615, avg=29.66, stdev=41.05 00:19:54.511 clat (usec): min=94, max=27905, avg=716.07, stdev=374.94 00:19:54.511 lat (usec): min=113, max=27930, avg=745.74, stdev=378.89 00:19:54.511 clat percentiles (usec): 00:19:54.511 | 50.000th=[ 676], 99.000th=[ 1795], 99.900th=[ 4015], 99.990th=[ 6325], 00:19:54.511 | 99.999th=[26870] 00:19:54.511 bw ( KiB/s): min=85150, max=158680, per=100.00%, avg=121391.72, stdev=3494.27, samples=112 00:19:54.511 iops : min=21286, max=39670, avg=30347.49, stdev=873.62, samples=112 00:19:54.511 lat (usec) : 100=0.01%, 250=4.95%, 500=28.33%, 750=36.16%, 1000=21.61% 00:19:54.511 lat (msec) : 2=8.54%, 4=0.35%, 10=0.05%, 20=0.01%, 50=0.01% 00:19:54.511 lat (msec) : 2000=0.01% 00:19:54.511 cpu : usr=55.61%, sys=28.58%, ctx=6624, majf=0, minf=25180 00:19:54.511 IO depths : 1=11.5%, 2=23.7%, 4=51.2%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.511 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.511 issued rwts: total=297455,300439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.511 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:54.511 00:19:54.511 Run status group 0 (all jobs): 00:19:54.511 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=1162MiB (1218MB), run=10001-10001msec 00:19:54.511 WRITE: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=1174MiB (1231MB), run=10001-10001msec 00:19:55.079 ----------------------------------------------------- 00:19:55.079 Suppressions used: 00:19:55.079 count bytes template 00:19:55.079 6 48 /usr/src/fio/parse.c 00:19:55.079 2760 264960 /usr/src/fio/iolog.c 00:19:55.079 1 8 libtcmalloc_minimal.so 00:19:55.079 1 904 libcrypto.so 00:19:55.079 ----------------------------------------------------- 00:19:55.079 00:19:55.079 00:19:55.079 real 0m13.086s 00:19:55.079 user 0m35.744s 00:19:55.079 sys 0m17.725s 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:55.079 ************************************ 00:19:55.079 END TEST bdev_fio_rw_verify 00:19:55.079 ************************************ 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:19:55.079 13:45:48 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:19:55.079 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:19:55.080 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:19:55.080 13:45:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:19:55.080 13:45:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:55.080 13:45:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "df38f8ae-3973-415a-bb97-34d164a84d19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "df38f8ae-3973-415a-bb97-34d164a84d19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "eebf42c9-10a2-445f-8011-30c8aed21738"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "eebf42c9-10a2-445f-8011-30c8aed21738",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "934c363a-f703-464e-80ab-3ce0771b9274"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "934c363a-f703-464e-80ab-3ce0771b9274",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f1b6a21c-7389-4375-903a-8b7ce67c6d50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f1b6a21c-7389-4375-903a-8b7ce67c6d50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8d73fdc5-67f1-4a9e-a751-4f06a368a357"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d73fdc5-67f1-4a9e-a751-4f06a368a357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "51ad4151-dd29-468b-a8eb-02511994467c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "51ad4151-dd29-468b-a8eb-02511994467c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:55.080 13:45:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:55.080 13:45:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:55.080 /home/vagrant/spdk_repo/spdk 00:19:55.080 ************************************ 00:19:55.080 END TEST bdev_fio 00:19:55.080 ************************************ 00:19:55.080 13:45:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- 
# popd 00:19:55.080 13:45:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:55.080 13:45:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:55.080 00:19:55.080 real 0m13.307s 00:19:55.080 user 0m35.850s 00:19:55.080 sys 0m17.843s 00:19:55.080 13:45:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:55.080 13:45:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:55.080 13:45:49 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:55.080 13:45:49 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:55.080 13:45:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:19:55.080 13:45:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:55.080 13:45:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.341 ************************************ 00:19:55.341 START TEST bdev_verify 00:19:55.341 ************************************ 00:19:55.341 13:45:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:55.341 [2024-11-06 13:45:49.192254] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:19:55.341 [2024-11-06 13:45:49.192463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72284 ] 00:19:55.599 [2024-11-06 13:45:49.403200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:55.856 [2024-11-06 13:45:49.611529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.856 [2024-11-06 13:45:49.611563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.422 Running I/O for 5 seconds... 
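[editor's note] The bdev_fio section that just closed has two mechanics worth calling out. First, the trim pass was a silent no-op: blockdev.sh@354 filters the dumped bdev JSON through jq for supported_io_types.unmap == true, and every xNVMe bdev above reports "unmap": false, so the subsequent '[[ -n '' ]]' test skips the trim fio run entirely. Second, the rw-verify pass ran stock fio from /usr/src/fio against SPDK bdevs by preloading two shared objects: the ASan runtime (located by ldd'ing the plugin) and the spdk_bdev fio plugin itself. A condensed sketch of that launch, using the paths this run printed:

  # condensed from the autotest_common.sh trace above; paths are the ones
  # printed in this run, and the libasan path is whatever ldd reports
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # /usr/lib64/libasan.so.8 here
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

Preloading the sanitizer runtime ahead of the plugin matters because fio itself is not built with ASan, so the runtime must already be mapped before the instrumented plugin's initializers run.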
00:19:58.366 24192.00 IOPS, 94.50 MiB/s [2024-11-06T13:45:53.725Z] 23472.00 IOPS, 91.69 MiB/s [2024-11-06T13:45:54.660Z] 23893.33 IOPS, 93.33 MiB/s [2024-11-06T13:45:55.640Z] 23616.00 IOPS, 92.25 MiB/s 00:20:01.657 Latency(us) 00:20:01.657 [2024-11-06T13:45:55.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.657 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x0 length 0xa0000 00:20:01.657 nvme0n1 : 5.04 1803.61 7.05 0.00 0.00 70848.26 14293.09 62165.58 00:20:01.657 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0xa0000 length 0xa0000 00:20:01.657 nvme0n1 : 5.02 1734.25 6.77 0.00 0.00 73681.50 9799.19 80390.83 00:20:01.657 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x0 length 0xbd0bd 00:20:01.657 nvme1n1 : 5.05 3040.66 11.88 0.00 0.00 41849.41 5055.63 56922.70 00:20:01.657 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:01.657 nvme1n1 : 5.05 3003.94 11.73 0.00 0.00 42374.34 4774.77 59918.63 00:20:01.657 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x0 length 0x80000 00:20:01.657 nvme2n1 : 5.06 1822.96 7.12 0.00 0.00 69710.42 9736.78 68906.42 00:20:01.657 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x80000 length 0x80000 00:20:01.657 nvme2n1 : 5.05 1747.52 6.83 0.00 0.00 72635.38 6210.32 68407.10 00:20:01.657 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x0 length 0x80000 00:20:01.657 nvme2n2 : 5.06 1820.45 7.11 0.00 0.00 69701.21 5461.33 60417.95 00:20:01.657 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x80000 length 0x80000 00:20:01.657 nvme2n2 : 5.05 1748.55 6.83 0.00 0.00 72440.28 10985.08 64412.53 00:20:01.657 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x0 length 0x80000 00:20:01.657 nvme2n3 : 5.06 1819.91 7.11 0.00 0.00 69606.56 6085.49 64911.85 00:20:01.657 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x80000 length 0x80000 00:20:01.657 nvme2n3 : 5.06 1745.24 6.82 0.00 0.00 72462.84 7614.66 69405.74 00:20:01.657 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x0 length 0x20000 00:20:01.657 nvme3n1 : 5.06 1821.32 7.11 0.00 0.00 69438.73 5274.09 71902.35 00:20:01.657 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:01.657 Verification LBA range: start 0x20000 length 0x20000 00:20:01.657 nvme3n1 : 5.07 1768.72 6.91 0.00 0.00 71424.28 1388.74 80390.83 00:20:01.657 [2024-11-06T13:45:55.640Z] =================================================================================================================== 00:20:01.657 [2024-11-06T13:45:55.640Z] Total : 23877.14 93.27 0.00 0.00 63811.79 1388.74 80390.83 00:20:03.033 ************************************ 00:20:03.033 END TEST bdev_verify 00:20:03.033 ************************************ 00:20:03.033 00:20:03.033 real 0m7.691s 00:20:03.033 user 0m11.865s 00:20:03.033 sys 
0m2.125s 00:20:03.033 13:45:56 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:03.033 13:45:56 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:03.033 13:45:56 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:03.033 13:45:56 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:20:03.033 13:45:56 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:03.033 13:45:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.033 ************************************ 00:20:03.033 START TEST bdev_verify_big_io 00:20:03.033 ************************************ 00:20:03.033 13:45:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:03.034 [2024-11-06 13:45:56.957986] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:20:03.034 [2024-11-06 13:45:56.958217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72388 ] 00:20:03.293 [2024-11-06 13:45:57.137193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:03.293 [2024-11-06 13:45:57.268401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.293 [2024-11-06 13:45:57.268447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.227 Running I/O for 5 seconds... 
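[editor's note] Two details make the verify table above easier to read. Each bdev appears twice because the run combines '-m 0x3' with '-C'; judging by the table, that gives every bdev one verify job per reactor, hence the paired Core Mask 0x1 / 0x2 rows. The MiB/s column is pure arithmetic on the 4 KiB IO size: the summary row's 23877.14 IOPS times 4096 bytes is 23877.14 / 256 = 93.27 MiB/s, exactly as printed. nvme1n1 stands out at roughly 3000 IOPS per core with ~42 us average latency; it is the 0xbd0bd-block device, the largest of the set.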
00:20:10.799 336.00 IOPS, 21.00 MiB/s [2024-11-06T13:46:04.782Z] 3112.00 IOPS, 194.50 MiB/s 00:20:10.799 Latency(us) 00:20:10.799 [2024-11-06T13:46:04.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.799 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x0 length 0xa000 00:20:10.799 nvme0n1 : 5.63 125.08 7.82 0.00 0.00 969958.29 11172.33 1430057.94 00:20:10.799 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0xa000 length 0xa000 00:20:10.799 nvme0n1 : 6.23 118.18 7.39 0.00 0.00 1040189.97 147799.28 1430057.94 00:20:10.799 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x0 length 0xbd0b 00:20:10.799 nvme1n1 : 6.20 162.15 10.13 0.00 0.00 701290.91 108852.18 1158426.82 00:20:10.799 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:10.799 nvme1n1 : 6.28 160.43 10.03 0.00 0.00 731945.23 45937.62 1102502.77 00:20:10.799 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x0 length 0x8000 00:20:10.799 nvme2n1 : 6.22 113.22 7.08 0.00 0.00 987890.35 12670.29 1661743.30 00:20:10.799 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x8000 length 0x8000 00:20:10.799 nvme2n1 : 6.28 122.21 7.64 0.00 0.00 911588.69 48434.22 1102502.77 00:20:10.799 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x0 length 0x8000 00:20:10.799 nvme2n2 : 6.23 131.02 8.19 0.00 0.00 838051.47 7989.15 910763.15 00:20:10.799 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x8000 length 0x8000 00:20:10.799 nvme2n2 : 6.28 132.58 8.29 0.00 0.00 812263.51 46187.28 1238318.32 00:20:10.799 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x0 length 0x8000 00:20:10.799 nvme2n3 : 6.22 151.69 9.48 0.00 0.00 695278.18 2824.29 1002638.38 00:20:10.799 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x8000 length 0x8000 00:20:10.799 nvme2n3 : 6.29 109.44 6.84 0.00 0.00 961050.24 42692.02 2013265.92 00:20:10.799 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x0 length 0x2000 00:20:10.799 nvme3n1 : 6.23 118.23 7.39 0.00 0.00 852683.66 8925.38 2173048.93 00:20:10.799 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:10.799 Verification LBA range: start 0x2000 length 0x2000 00:20:10.799 nvme3n1 : 6.29 119.55 7.47 0.00 0.00 843192.24 2418.59 1773591.41 00:20:10.799 [2024-11-06T13:46:04.782Z] =================================================================================================================== 00:20:10.799 [2024-11-06T13:46:04.782Z] Total : 1563.77 97.74 0.00 0.00 848582.52 2418.59 2173048.93 00:20:11.736 ************************************ 00:20:11.736 END TEST bdev_verify_big_io 00:20:11.736 00:20:11.736 real 0m8.848s 00:20:11.736 user 0m16.050s 00:20:11.736 sys 0m0.619s 00:20:11.736 13:46:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 
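[editor's note] The big-IO pass above is the same verify workload at a 64 KiB IO size ('-o 65536'), so each IO carries 16x the bandwidth of the 4 KiB run: the early 336 IOPS sample is 336 x 64 KiB = 21.00 MiB/s, and the 3112 IOPS sample is 3112 / 16 = 194.50 MiB/s, matching the ramp line above. A throwaway helper for cross-checking these columns (illustrative, not part of the harness):

  # hypothetical helper for cross-checking bdevperf's MiB/s column
  iops_to_mibs() {  # usage: iops_to_mibs IOPS IO_SIZE_BYTES
    awk -v i="$1" -v b="$2" 'BEGIN { printf "%.2f MiB/s\n", i * b / 1048576 }'
  }
  iops_to_mibs 336 65536   # 21.00 MiB/s
  iops_to_mibs 3112 65536  # 194.50 MiB/s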
00:20:11.736 13:46:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.736 ************************************ 00:20:11.736 13:46:05 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:11.736 13:46:05 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:11.736 13:46:05 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:11.736 13:46:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:11.995 ************************************ 00:20:11.995 START TEST bdev_write_zeroes 00:20:11.995 ************************************ 00:20:11.995 13:46:05 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:11.995 [2024-11-06 13:46:05.809963] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:20:11.995 [2024-11-06 13:46:05.810159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72508 ] 00:20:12.254 [2024-11-06 13:46:05.992830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.254 [2024-11-06 13:46:06.153364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.822 Running I/O for 1 seconds... 00:20:13.779 73888.00 IOPS, 288.62 MiB/s 00:20:13.779 Latency(us) 00:20:13.779 [2024-11-06T13:46:07.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.779 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:13.779 nvme0n1 : 1.02 10923.03 42.67 0.00 0.00 11707.63 7240.17 22719.15 00:20:13.779 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:13.779 nvme1n1 : 1.02 18652.37 72.86 0.00 0.00 6849.26 3963.37 15166.90 00:20:13.779 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:13.779 nvme2n1 : 1.03 10861.68 42.43 0.00 0.00 11713.33 5492.54 23842.62 00:20:13.779 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:13.779 nvme2n2 : 1.03 10850.88 42.39 0.00 0.00 11711.38 5149.26 24092.28 00:20:13.779 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:13.779 nvme2n3 : 1.03 10839.86 42.34 0.00 0.00 11714.54 5180.46 24466.77 00:20:13.779 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:13.779 nvme3n1 : 1.03 10828.68 42.30 0.00 0.00 11718.92 5305.30 24716.43 00:20:13.779 [2024-11-06T13:46:07.762Z] =================================================================================================================== 00:20:13.779 [2024-11-06T13:46:07.762Z] Total : 72956.50 284.99 0.00 0.00 10470.65 3963.37 24716.43 00:20:15.157 ************************************ 00:20:15.157 END TEST bdev_write_zeroes 00:20:15.157 ************************************ 00:20:15.157 00:20:15.157 real 0m3.264s 00:20:15.157 user 0m2.406s 00:20:15.157 sys 0m0.682s 00:20:15.157 13:46:08 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:15.157 13:46:08 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:15.157 13:46:09 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:15.157 13:46:09 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:15.157 13:46:09 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:15.157 13:46:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:15.157 ************************************ 00:20:15.157 START TEST bdev_json_nonenclosed 00:20:15.157 ************************************ 00:20:15.157 13:46:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:15.416 [2024-11-06 13:46:09.165279] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:20:15.416 [2024-11-06 13:46:09.165456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72570 ] 00:20:15.416 [2024-11-06 13:46:09.356987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.674 [2024-11-06 13:46:09.505232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.674 [2024-11-06 13:46:09.505361] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:15.674 [2024-11-06 13:46:09.505390] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:15.674 [2024-11-06 13:46:09.505406] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:15.933 00:20:15.933 real 0m0.748s 00:20:15.933 user 0m0.462s 00:20:15.933 sys 0m0.179s 00:20:15.933 13:46:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:15.933 ************************************ 00:20:15.933 END TEST bdev_json_nonenclosed 00:20:15.933 ************************************ 00:20:15.933 13:46:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:15.933 13:46:09 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:15.933 13:46:09 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:15.933 13:46:09 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:15.933 13:46:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:15.933 ************************************ 00:20:15.933 START TEST bdev_json_nonarray 00:20:15.933 ************************************ 00:20:15.933 13:46:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:16.191 [2024-11-06 13:46:09.978830] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
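[editor's note] bdev_json_nonenclosed above and bdev_json_nonarray below are negative tests: each feeds bdevperf a deliberately malformed config and passes only if json_config_prepare_ctx rejects it ('not enclosed in {}', and ''subsystems' should be an array') and the app exits through spdk_app_stop, which is exactly what the *ERROR* and *WARNING* lines trace. The fixture files' contents are not shown in this log, but anything whose top level parses as JSON yet is not an object should reproduce the first error, for instance:

  # illustrative only; the real test/bdev/nonenclosed.json is not shown here
  echo '[]' > /tmp/nonenclosed.json   # valid JSON, but not enclosed in {}
  ./build/examples/bdevperf --json /tmp/nonenclosed.json \
    -q 128 -o 4096 -w write_zeroes -t 1 \
    || echo 'rejected: not enclosed in {}'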
00:20:16.191 [2024-11-06 13:46:09.979001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72601 ] 00:20:16.449 [2024-11-06 13:46:10.175310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.449 [2024-11-06 13:46:10.319543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.450 [2024-11-06 13:46:10.319693] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:20:16.450 [2024-11-06 13:46:10.319724] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:16.450 [2024-11-06 13:46:10.319740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:16.708 ************************************ 00:20:16.708 END TEST bdev_json_nonarray 00:20:16.708 ************************************ 00:20:16.708 00:20:16.708 real 0m0.747s 00:20:16.708 user 0m0.450s 00:20:16.708 sys 0m0.190s 00:20:16.708 13:46:10 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:16.708 13:46:10 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:16.708 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:16.708 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:16.708 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:16.708 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:16.708 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:16.709 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:16.709 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:16.709 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:16.709 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:16.709 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:16.709 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:16.709 13:46:10 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:17.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:25.394 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:33.508 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:33.508 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:33.508 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:33.508 00:20:33.508 real 1m21.903s 00:20:33.508 user 1m48.325s 00:20:33.508 sys 1m16.581s 00:20:33.508 13:46:26 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:33.508 13:46:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:33.508 ************************************ 00:20:33.508 END TEST blockdev_xnvme 00:20:33.508 ************************************ 00:20:33.508 13:46:26 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:33.508 13:46:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:33.508 13:46:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:33.508 13:46:26 -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.508 ************************************ 00:20:33.508 START TEST ublk 00:20:33.508 ************************************ 00:20:33.508 13:46:26 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:33.508 * Looking for test storage... 00:20:33.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:33.508 13:46:26 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:33.508 13:46:26 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:20:33.508 13:46:26 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:33.508 13:46:26 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:33.508 13:46:26 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.508 13:46:26 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.508 13:46:26 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.508 13:46:26 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.508 13:46:26 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.508 13:46:26 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.508 13:46:26 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.508 13:46:26 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.508 13:46:26 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.508 13:46:26 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.508 13:46:26 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.508 13:46:26 ublk -- scripts/common.sh@344 -- # case "$op" in 00:20:33.508 13:46:26 ublk -- scripts/common.sh@345 -- # : 1 00:20:33.508 13:46:26 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.508 13:46:26 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:33.508 13:46:26 ublk -- scripts/common.sh@365 -- # decimal 1 00:20:33.508 13:46:26 ublk -- scripts/common.sh@353 -- # local d=1 00:20:33.508 13:46:26 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.508 13:46:26 ublk -- scripts/common.sh@355 -- # echo 1 00:20:33.508 13:46:26 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.508 13:46:26 ublk -- scripts/common.sh@366 -- # decimal 2 00:20:33.508 13:46:26 ublk -- scripts/common.sh@353 -- # local d=2 00:20:33.508 13:46:26 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.508 13:46:26 ublk -- scripts/common.sh@355 -- # echo 2 00:20:33.508 13:46:26 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.508 13:46:26 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.508 13:46:26 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.508 13:46:26 ublk -- scripts/common.sh@368 -- # return 0 00:20:33.508 13:46:26 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.508 13:46:26 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:33.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.508 --rc genhtml_branch_coverage=1 00:20:33.508 --rc genhtml_function_coverage=1 00:20:33.508 --rc genhtml_legend=1 00:20:33.508 --rc geninfo_all_blocks=1 00:20:33.508 --rc geninfo_unexecuted_blocks=1 00:20:33.508 00:20:33.508 ' 00:20:33.509 13:46:26 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:33.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.509 --rc genhtml_branch_coverage=1 00:20:33.509 --rc genhtml_function_coverage=1 00:20:33.509 --rc genhtml_legend=1 00:20:33.509 --rc geninfo_all_blocks=1 00:20:33.509 --rc geninfo_unexecuted_blocks=1 00:20:33.509 00:20:33.509 ' 00:20:33.509 13:46:26 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:33.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.509 --rc genhtml_branch_coverage=1 00:20:33.509 --rc genhtml_function_coverage=1 00:20:33.509 --rc genhtml_legend=1 00:20:33.509 --rc geninfo_all_blocks=1 00:20:33.509 --rc geninfo_unexecuted_blocks=1 00:20:33.509 00:20:33.509 ' 00:20:33.509 13:46:26 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:33.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.509 --rc genhtml_branch_coverage=1 00:20:33.509 --rc genhtml_function_coverage=1 00:20:33.509 --rc genhtml_legend=1 00:20:33.509 --rc geninfo_all_blocks=1 00:20:33.509 --rc geninfo_unexecuted_blocks=1 00:20:33.509 00:20:33.509 ' 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:33.509 13:46:26 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:33.509 13:46:26 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:33.509 13:46:26 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:33.509 13:46:26 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:33.509 13:46:26 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:33.509 13:46:26 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:33.509 13:46:26 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:33.509 13:46:26 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:20:33.509 13:46:26 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:20:33.509 13:46:26 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:20:33.509 13:46:26 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:33.509 13:46:26 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:33.509 13:46:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:33.509 ************************************ 00:20:33.509 START TEST test_save_ublk_config 00:20:33.509 ************************************ 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72916 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72916 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72916 ']' 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:33.509 13:46:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:33.509 [2024-11-06 13:46:27.162043] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
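[editor's note] Before any ublk work starts, the long scripts/common.sh trace above is just feature detection: it takes the last field of 'lcov --version' and runs a field-wise version comparison ('lt 1.15 2', splitting on '.', '-' and ':') to decide which coverage option spelling to export (here, the legacy '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' form). The traced cmp_versions walks both arrays and tallies greater/less fields; a condensed sketch of the same decision, not the verbatim code:

  # sketch of the version comparison traced above
  lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
  }
  lt 1.15 2 && echo 'lcov is pre-2.x: export the legacy --rc lcov_* options'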
00:20:33.509 [2024-11-06 13:46:27.162450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72916 ] 00:20:33.509 [2024-11-06 13:46:27.353586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.509 [2024-11-06 13:46:27.470702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.446 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:34.446 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:20:34.446 13:46:28 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:20:34.446 13:46:28 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:20:34.446 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.446 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:34.446 [2024-11-06 13:46:28.402049] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:34.446 [2024-11-06 13:46:28.403270] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:34.707 malloc0 00:20:34.707 [2024-11-06 13:46:28.491183] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:34.707 [2024-11-06 13:46:28.491281] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:34.707 [2024-11-06 13:46:28.491295] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:34.707 [2024-11-06 13:46:28.491304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:34.707 [2024-11-06 13:46:28.499077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:34.707 [2024-11-06 13:46:28.499105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:34.707 [2024-11-06 13:46:28.507050] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:34.707 [2024-11-06 13:46:28.507161] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:34.707 [2024-11-06 13:46:28.531067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:34.707 0 00:20:34.707 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.707 13:46:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:20:34.707 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.707 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:34.966 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.966 13:46:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:34.966 "subsystems": [ 00:20:34.966 { 00:20:34.966 "subsystem": "fsdev", 00:20:34.966 "config": [ 00:20:34.966 { 00:20:34.966 "method": "fsdev_set_opts", 00:20:34.966 "params": { 00:20:34.966 "fsdev_io_pool_size": 65535, 00:20:34.966 "fsdev_io_cache_size": 256 00:20:34.966 } 00:20:34.966 } 00:20:34.966 ] 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "subsystem": "keyring", 00:20:34.966 "config": [] 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "subsystem": "iobuf", 00:20:34.966 "config": [ 00:20:34.966 { 
00:20:34.966 "method": "iobuf_set_options", 00:20:34.966 "params": { 00:20:34.966 "small_pool_count": 8192, 00:20:34.966 "large_pool_count": 1024, 00:20:34.966 "small_bufsize": 8192, 00:20:34.966 "large_bufsize": 135168, 00:20:34.966 "enable_numa": false 00:20:34.966 } 00:20:34.966 } 00:20:34.966 ] 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "subsystem": "sock", 00:20:34.966 "config": [ 00:20:34.966 { 00:20:34.966 "method": "sock_set_default_impl", 00:20:34.966 "params": { 00:20:34.966 "impl_name": "posix" 00:20:34.966 } 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "method": "sock_impl_set_options", 00:20:34.966 "params": { 00:20:34.966 "impl_name": "ssl", 00:20:34.966 "recv_buf_size": 4096, 00:20:34.966 "send_buf_size": 4096, 00:20:34.966 "enable_recv_pipe": true, 00:20:34.966 "enable_quickack": false, 00:20:34.966 "enable_placement_id": 0, 00:20:34.966 "enable_zerocopy_send_server": true, 00:20:34.966 "enable_zerocopy_send_client": false, 00:20:34.966 "zerocopy_threshold": 0, 00:20:34.966 "tls_version": 0, 00:20:34.966 "enable_ktls": false 00:20:34.966 } 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "method": "sock_impl_set_options", 00:20:34.966 "params": { 00:20:34.966 "impl_name": "posix", 00:20:34.966 "recv_buf_size": 2097152, 00:20:34.966 "send_buf_size": 2097152, 00:20:34.966 "enable_recv_pipe": true, 00:20:34.966 "enable_quickack": false, 00:20:34.966 "enable_placement_id": 0, 00:20:34.966 "enable_zerocopy_send_server": true, 00:20:34.966 "enable_zerocopy_send_client": false, 00:20:34.966 "zerocopy_threshold": 0, 00:20:34.966 "tls_version": 0, 00:20:34.966 "enable_ktls": false 00:20:34.966 } 00:20:34.966 } 00:20:34.966 ] 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "subsystem": "vmd", 00:20:34.966 "config": [] 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "subsystem": "accel", 00:20:34.966 "config": [ 00:20:34.966 { 00:20:34.966 "method": "accel_set_options", 00:20:34.966 "params": { 00:20:34.966 "small_cache_size": 128, 00:20:34.966 "large_cache_size": 16, 00:20:34.966 "task_count": 2048, 00:20:34.966 "sequence_count": 2048, 00:20:34.966 "buf_count": 2048 00:20:34.966 } 00:20:34.966 } 00:20:34.966 ] 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "subsystem": "bdev", 00:20:34.966 "config": [ 00:20:34.966 { 00:20:34.966 "method": "bdev_set_options", 00:20:34.966 "params": { 00:20:34.966 "bdev_io_pool_size": 65535, 00:20:34.966 "bdev_io_cache_size": 256, 00:20:34.966 "bdev_auto_examine": true, 00:20:34.966 "iobuf_small_cache_size": 128, 00:20:34.966 "iobuf_large_cache_size": 16 00:20:34.966 } 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "method": "bdev_raid_set_options", 00:20:34.966 "params": { 00:20:34.966 "process_window_size_kb": 1024, 00:20:34.966 "process_max_bandwidth_mb_sec": 0 00:20:34.966 } 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "method": "bdev_iscsi_set_options", 00:20:34.966 "params": { 00:20:34.966 "timeout_sec": 30 00:20:34.966 } 00:20:34.966 }, 00:20:34.966 { 00:20:34.966 "method": "bdev_nvme_set_options", 00:20:34.966 "params": { 00:20:34.966 "action_on_timeout": "none", 00:20:34.966 "timeout_us": 0, 00:20:34.966 "timeout_admin_us": 0, 00:20:34.966 "keep_alive_timeout_ms": 10000, 00:20:34.966 "arbitration_burst": 0, 00:20:34.966 "low_priority_weight": 0, 00:20:34.966 "medium_priority_weight": 0, 00:20:34.966 "high_priority_weight": 0, 00:20:34.966 "nvme_adminq_poll_period_us": 10000, 00:20:34.966 "nvme_ioq_poll_period_us": 0, 00:20:34.966 "io_queue_requests": 0, 00:20:34.967 "delay_cmd_submit": true, 00:20:34.967 "transport_retry_count": 4, 00:20:34.967 
"bdev_retry_count": 3, 00:20:34.967 "transport_ack_timeout": 0, 00:20:34.967 "ctrlr_loss_timeout_sec": 0, 00:20:34.967 "reconnect_delay_sec": 0, 00:20:34.967 "fast_io_fail_timeout_sec": 0, 00:20:34.967 "disable_auto_failback": false, 00:20:34.967 "generate_uuids": false, 00:20:34.967 "transport_tos": 0, 00:20:34.967 "nvme_error_stat": false, 00:20:34.967 "rdma_srq_size": 0, 00:20:34.967 "io_path_stat": false, 00:20:34.967 "allow_accel_sequence": false, 00:20:34.967 "rdma_max_cq_size": 0, 00:20:34.967 "rdma_cm_event_timeout_ms": 0, 00:20:34.967 "dhchap_digests": [ 00:20:34.967 "sha256", 00:20:34.967 "sha384", 00:20:34.967 "sha512" 00:20:34.967 ], 00:20:34.967 "dhchap_dhgroups": [ 00:20:34.967 "null", 00:20:34.967 "ffdhe2048", 00:20:34.967 "ffdhe3072", 00:20:34.967 "ffdhe4096", 00:20:34.967 "ffdhe6144", 00:20:34.967 "ffdhe8192" 00:20:34.967 ] 00:20:34.967 } 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "method": "bdev_nvme_set_hotplug", 00:20:34.967 "params": { 00:20:34.967 "period_us": 100000, 00:20:34.967 "enable": false 00:20:34.967 } 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "method": "bdev_malloc_create", 00:20:34.967 "params": { 00:20:34.967 "name": "malloc0", 00:20:34.967 "num_blocks": 8192, 00:20:34.967 "block_size": 4096, 00:20:34.967 "physical_block_size": 4096, 00:20:34.967 "uuid": "741e2747-015f-47d8-bda5-501f9b7eeed2", 00:20:34.967 "optimal_io_boundary": 0, 00:20:34.967 "md_size": 0, 00:20:34.967 "dif_type": 0, 00:20:34.967 "dif_is_head_of_md": false, 00:20:34.967 "dif_pi_format": 0 00:20:34.967 } 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "method": "bdev_wait_for_examine" 00:20:34.967 } 00:20:34.967 ] 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "subsystem": "scsi", 00:20:34.967 "config": null 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "subsystem": "scheduler", 00:20:34.967 "config": [ 00:20:34.967 { 00:20:34.967 "method": "framework_set_scheduler", 00:20:34.967 "params": { 00:20:34.967 "name": "static" 00:20:34.967 } 00:20:34.967 } 00:20:34.967 ] 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "subsystem": "vhost_scsi", 00:20:34.967 "config": [] 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "subsystem": "vhost_blk", 00:20:34.967 "config": [] 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "subsystem": "ublk", 00:20:34.967 "config": [ 00:20:34.967 { 00:20:34.967 "method": "ublk_create_target", 00:20:34.967 "params": { 00:20:34.967 "cpumask": "1" 00:20:34.967 } 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "method": "ublk_start_disk", 00:20:34.967 "params": { 00:20:34.967 "bdev_name": "malloc0", 00:20:34.967 "ublk_id": 0, 00:20:34.967 "num_queues": 1, 00:20:34.967 "queue_depth": 128 00:20:34.967 } 00:20:34.967 } 00:20:34.967 ] 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "subsystem": "nbd", 00:20:34.967 "config": [] 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "subsystem": "nvmf", 00:20:34.967 "config": [ 00:20:34.967 { 00:20:34.967 "method": "nvmf_set_config", 00:20:34.967 "params": { 00:20:34.967 "discovery_filter": "match_any", 00:20:34.967 "admin_cmd_passthru": { 00:20:34.967 "identify_ctrlr": false 00:20:34.967 }, 00:20:34.967 "dhchap_digests": [ 00:20:34.967 "sha256", 00:20:34.967 "sha384", 00:20:34.967 "sha512" 00:20:34.967 ], 00:20:34.967 "dhchap_dhgroups": [ 00:20:34.967 "null", 00:20:34.967 "ffdhe2048", 00:20:34.967 "ffdhe3072", 00:20:34.967 "ffdhe4096", 00:20:34.967 "ffdhe6144", 00:20:34.967 "ffdhe8192" 00:20:34.967 ] 00:20:34.967 } 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "method": "nvmf_set_max_subsystems", 00:20:34.967 "params": { 00:20:34.967 "max_subsystems": 1024 
00:20:34.967 } 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "method": "nvmf_set_crdt", 00:20:34.967 "params": { 00:20:34.967 "crdt1": 0, 00:20:34.967 "crdt2": 0, 00:20:34.967 "crdt3": 0 00:20:34.967 } 00:20:34.967 } 00:20:34.967 ] 00:20:34.967 }, 00:20:34.967 { 00:20:34.967 "subsystem": "iscsi", 00:20:34.967 "config": [ 00:20:34.967 { 00:20:34.967 "method": "iscsi_set_options", 00:20:34.967 "params": { 00:20:34.967 "node_base": "iqn.2016-06.io.spdk", 00:20:34.967 "max_sessions": 128, 00:20:34.967 "max_connections_per_session": 2, 00:20:34.967 "max_queue_depth": 64, 00:20:34.967 "default_time2wait": 2, 00:20:34.967 "default_time2retain": 20, 00:20:34.967 "first_burst_length": 8192, 00:20:34.967 "immediate_data": true, 00:20:34.967 "allow_duplicated_isid": false, 00:20:34.967 "error_recovery_level": 0, 00:20:34.967 "nop_timeout": 60, 00:20:34.967 "nop_in_interval": 30, 00:20:34.967 "disable_chap": false, 00:20:34.967 "require_chap": false, 00:20:34.967 "mutual_chap": false, 00:20:34.967 "chap_group": 0, 00:20:34.967 "max_large_datain_per_connection": 64, 00:20:34.967 "max_r2t_per_connection": 4, 00:20:34.967 "pdu_pool_size": 36864, 00:20:34.967 "immediate_data_pool_size": 16384, 00:20:34.967 "data_out_pool_size": 2048 00:20:34.967 } 00:20:34.967 } 00:20:34.967 ] 00:20:34.967 } 00:20:34.967 ] 00:20:34.967 }' 00:20:34.967 13:46:28 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72916 00:20:34.967 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72916 ']' 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72916 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72916 00:20:34.968 killing process with pid 72916 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72916' 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72916 00:20:34.968 13:46:28 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72916 00:20:36.870 [2024-11-06 13:46:30.434944] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:36.870 [2024-11-06 13:46:30.474110] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:36.870 [2024-11-06 13:46:30.474327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:36.870 [2024-11-06 13:46:30.483070] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:36.870 [2024-11-06 13:46:30.483136] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:36.870 [2024-11-06 13:46:30.483158] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:36.870 [2024-11-06 13:46:30.483192] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:36.870 [2024-11-06 13:46:30.483380] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:39.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
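[editor's note] Everything from here on is the restore half of test_save_ublk_config: the JSON blob dumped above was captured via 'rpc_cmd save_config', the first target (pid 72916) was torn down (note the UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV DEBUG lines), and a second spdk_tgt is now started with that blob fed back through process substitution, which is why '-c /dev/fd/63' appears in the trace below. No RPCs are issued by hand this time; the ublk target, malloc0 and ublk device 0 must all be recreated from the config alone. In outline (rpc_cmd, killprocess and waitforlisten are the harness wrappers seen in the trace):

  # outline of the restore step; paths and pids are the ones this run printed
  config=$(rpc_cmd save_config)                         # the JSON dumped above
  killprocess "$tgtpid"                                 # first target, pid 72916
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk \
    -c <(echo "$config") &                              # second target, pid 72993
  waitforlisten "$!"                                    # the /var/tmp/spdk.sock wait above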
00:20:39.416 13:46:32 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72993 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72993 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72993 ']' 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:39.416 13:46:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:39.416 "subsystems": [ 00:20:39.416 { 00:20:39.416 "subsystem": "fsdev", 00:20:39.416 "config": [ 00:20:39.416 { 00:20:39.416 "method": "fsdev_set_opts", 00:20:39.416 "params": { 00:20:39.416 "fsdev_io_pool_size": 65535, 00:20:39.416 "fsdev_io_cache_size": 256 00:20:39.416 } 00:20:39.416 } 00:20:39.416 ] 00:20:39.416 }, 00:20:39.416 { 00:20:39.416 "subsystem": "keyring", 00:20:39.416 "config": [] 00:20:39.416 }, 00:20:39.416 { 00:20:39.416 "subsystem": "iobuf", 00:20:39.416 "config": [ 00:20:39.416 { 00:20:39.416 "method": "iobuf_set_options", 00:20:39.416 "params": { 00:20:39.416 "small_pool_count": 8192, 00:20:39.416 "large_pool_count": 1024, 00:20:39.416 "small_bufsize": 8192, 00:20:39.416 "large_bufsize": 135168, 00:20:39.416 "enable_numa": false 00:20:39.416 } 00:20:39.416 } 00:20:39.416 ] 00:20:39.416 }, 00:20:39.416 { 00:20:39.417 "subsystem": "sock", 00:20:39.417 "config": [ 00:20:39.417 { 00:20:39.417 "method": "sock_set_default_impl", 00:20:39.417 "params": { 00:20:39.417 "impl_name": "posix" 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "sock_impl_set_options", 00:20:39.417 "params": { 00:20:39.417 "impl_name": "ssl", 00:20:39.417 "recv_buf_size": 4096, 00:20:39.417 "send_buf_size": 4096, 00:20:39.417 "enable_recv_pipe": true, 00:20:39.417 "enable_quickack": false, 00:20:39.417 "enable_placement_id": 0, 00:20:39.417 "enable_zerocopy_send_server": true, 00:20:39.417 "enable_zerocopy_send_client": false, 00:20:39.417 "zerocopy_threshold": 0, 00:20:39.417 "tls_version": 0, 00:20:39.417 "enable_ktls": false 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "sock_impl_set_options", 00:20:39.417 "params": { 00:20:39.417 "impl_name": "posix", 00:20:39.417 "recv_buf_size": 2097152, 00:20:39.417 "send_buf_size": 2097152, 00:20:39.417 "enable_recv_pipe": true, 00:20:39.417 "enable_quickack": false, 00:20:39.417 "enable_placement_id": 0, 00:20:39.417 "enable_zerocopy_send_server": true, 00:20:39.417 "enable_zerocopy_send_client": false, 00:20:39.417 "zerocopy_threshold": 0, 00:20:39.417 "tls_version": 0, 00:20:39.417 "enable_ktls": false 00:20:39.417 } 00:20:39.417 } 00:20:39.417 ] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "vmd", 00:20:39.417 "config": [] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "accel", 00:20:39.417 "config": [ 00:20:39.417 { 00:20:39.417 "method": "accel_set_options", 00:20:39.417 "params": { 
00:20:39.417 "small_cache_size": 128, 00:20:39.417 "large_cache_size": 16, 00:20:39.417 "task_count": 2048, 00:20:39.417 "sequence_count": 2048, 00:20:39.417 "buf_count": 2048 00:20:39.417 } 00:20:39.417 } 00:20:39.417 ] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "bdev", 00:20:39.417 "config": [ 00:20:39.417 { 00:20:39.417 "method": "bdev_set_options", 00:20:39.417 "params": { 00:20:39.417 "bdev_io_pool_size": 65535, 00:20:39.417 "bdev_io_cache_size": 256, 00:20:39.417 "bdev_auto_examine": true, 00:20:39.417 "iobuf_small_cache_size": 128, 00:20:39.417 "iobuf_large_cache_size": 16 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "bdev_raid_set_options", 00:20:39.417 "params": { 00:20:39.417 "process_window_size_kb": 1024, 00:20:39.417 "process_max_bandwidth_mb_sec": 0 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "bdev_iscsi_set_options", 00:20:39.417 "params": { 00:20:39.417 "timeout_sec": 30 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "bdev_nvme_set_options", 00:20:39.417 "params": { 00:20:39.417 "action_on_timeout": "none", 00:20:39.417 "timeout_us": 0, 00:20:39.417 "timeout_admin_us": 0, 00:20:39.417 "keep_alive_timeout_ms": 10000, 00:20:39.417 "arbitration_burst": 0, 00:20:39.417 "low_priority_weight": 0, 00:20:39.417 "medium_priority_weight": 0, 00:20:39.417 "high_priority_weight": 0, 00:20:39.417 "nvme_adminq_poll_period_us": 10000, 00:20:39.417 "nvme_ioq_poll_period_us": 0, 00:20:39.417 "io_queue_requests": 0, 00:20:39.417 "delay_cmd_submit": true, 00:20:39.417 "transport_retry_count": 4, 00:20:39.417 "bdev_retry_count": 3, 00:20:39.417 "transport_ack_timeout": 0, 00:20:39.417 "ctrlr_loss_timeout_sec": 0, 00:20:39.417 "reconnect_delay_sec": 0, 00:20:39.417 "fast_io_fail_timeout_sec": 0, 00:20:39.417 "disable_auto_failback": false, 00:20:39.417 "generate_uuids": false, 00:20:39.417 "transport_tos": 0, 00:20:39.417 "nvme_error_stat": false, 00:20:39.417 "rdma_srq_size": 0, 00:20:39.417 "io_path_stat": false, 00:20:39.417 "allow_accel_sequence": false, 00:20:39.417 "rdma_max_cq_size": 0, 00:20:39.417 "rdma_cm_event_timeout_ms": 0, 00:20:39.417 "dhchap_digests": [ 00:20:39.417 "sha256", 00:20:39.417 "sha384", 00:20:39.417 "sha512" 00:20:39.417 ], 00:20:39.417 "dhchap_dhgroups": [ 00:20:39.417 "null", 00:20:39.417 "ffdhe2048", 00:20:39.417 "ffdhe3072", 00:20:39.417 "ffdhe4096", 00:20:39.417 "ffdhe6144", 00:20:39.417 "ffdhe8192" 00:20:39.417 ] 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "bdev_nvme_set_hotplug", 00:20:39.417 "params": { 00:20:39.417 "period_us": 100000, 00:20:39.417 "enable": false 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "bdev_malloc_create", 00:20:39.417 "params": { 00:20:39.417 "name": "malloc0", 00:20:39.417 "num_blocks": 8192, 00:20:39.417 "block_size": 4096, 00:20:39.417 "physical_block_size": 4096, 00:20:39.417 "uuid": "741e2747-015f-47d8-bda5-501f9b7eeed2", 00:20:39.417 "optimal_io_boundary": 0, 00:20:39.417 "md_size": 0, 00:20:39.417 "dif_type": 0, 00:20:39.417 "dif_is_head_of_md": false, 00:20:39.417 "dif_pi_format": 0 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "bdev_wait_for_examine" 00:20:39.417 } 00:20:39.417 ] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "scsi", 00:20:39.417 "config": null 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "scheduler", 00:20:39.417 "config": [ 00:20:39.417 { 00:20:39.417 "method": "framework_set_scheduler", 00:20:39.417 "params": { 00:20:39.417 
"name": "static" 00:20:39.417 } 00:20:39.417 } 00:20:39.417 ] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "vhost_scsi", 00:20:39.417 "config": [] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "vhost_blk", 00:20:39.417 "config": [] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "ublk", 00:20:39.417 "config": [ 00:20:39.417 { 00:20:39.417 "method": "ublk_create_target", 00:20:39.417 "params": { 00:20:39.417 "cpumask": "1" 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "ublk_start_disk", 00:20:39.417 "params": { 00:20:39.417 "bdev_name": "malloc0", 00:20:39.417 "ublk_id": 0, 00:20:39.417 "num_queues": 1, 00:20:39.417 "queue_depth": 128 00:20:39.417 } 00:20:39.417 } 00:20:39.417 ] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "nbd", 00:20:39.417 "config": [] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "nvmf", 00:20:39.417 "config": [ 00:20:39.417 { 00:20:39.417 "method": "nvmf_set_config", 00:20:39.417 "params": { 00:20:39.417 "discovery_filter": "match_any", 00:20:39.417 "admin_cmd_passthru": { 00:20:39.417 "identify_ctrlr": false 00:20:39.417 }, 00:20:39.417 "dhchap_digests": [ 00:20:39.417 "sha256", 00:20:39.417 "sha384", 00:20:39.417 "sha512" 00:20:39.417 ], 00:20:39.417 "dhchap_dhgroups": [ 00:20:39.417 "null", 00:20:39.417 "ffdhe2048", 00:20:39.417 "ffdhe3072", 00:20:39.417 "ffdhe4096", 00:20:39.417 "ffdhe6144", 00:20:39.417 "ffdhe8192" 00:20:39.417 ] 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "nvmf_set_max_subsystems", 00:20:39.417 "params": { 00:20:39.417 "max_subsystems": 1024 00:20:39.417 } 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "method": "nvmf_set_crdt", 00:20:39.417 "params": { 00:20:39.417 "crdt1": 0, 00:20:39.417 "crdt2": 0, 00:20:39.417 "crdt3": 0 00:20:39.417 } 00:20:39.417 } 00:20:39.417 ] 00:20:39.417 }, 00:20:39.417 { 00:20:39.417 "subsystem": "iscsi", 00:20:39.417 "config": [ 00:20:39.417 { 00:20:39.417 "method": "iscsi_set_options", 00:20:39.417 "params": { 00:20:39.417 "node_base": "iqn.2016-06.io.spdk", 00:20:39.417 "max_sessions": 128, 00:20:39.417 "max_connections_per_session": 2, 00:20:39.417 "max_queue_depth": 64, 00:20:39.417 "default_time2wait": 2, 00:20:39.417 "default_time2retain": 20, 00:20:39.417 "first_burst_length": 8192, 00:20:39.417 "immediate_data": true, 00:20:39.417 "allow_duplicated_isid": false, 00:20:39.417 "error_recovery_level": 0, 00:20:39.417 "nop_timeout": 60, 00:20:39.417 "nop_in_interval": 30, 00:20:39.417 "disable_chap": false, 00:20:39.417 "require_chap": false, 00:20:39.417 "mutual_chap": false, 00:20:39.417 "chap_group": 0, 00:20:39.417 "max_large_datain_per_connection": 64, 00:20:39.417 "max_r2t_per_connection": 4, 00:20:39.417 "pdu_pool_size": 36864, 00:20:39.417 "immediate_data_pool_size": 16384, 00:20:39.417 "data_out_pool_size": 2048 00:20:39.417 } 00:20:39.417 } 00:20:39.417 ] 00:20:39.417 } 00:20:39.417 ] 00:20:39.417 }' 00:20:39.417 [2024-11-06 13:46:32.974332] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:20:39.418 [2024-11-06 13:46:32.974551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72993 ] 00:20:39.418 [2024-11-06 13:46:33.168795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.418 [2024-11-06 13:46:33.318820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.815 [2024-11-06 13:46:34.593046] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:40.815 [2024-11-06 13:46:34.594445] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:40.815 [2024-11-06 13:46:34.601189] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:40.815 [2024-11-06 13:46:34.601283] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:40.815 [2024-11-06 13:46:34.601298] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:40.815 [2024-11-06 13:46:34.601307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.815 [2024-11-06 13:46:34.610154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.815 [2024-11-06 13:46:34.610181] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.815 [2024-11-06 13:46:34.617055] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.815 [2024-11-06 13:46:34.617159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:40.815 [2024-11-06 13:46:34.634046] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72993 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72993 ']' 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72993 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72993 00:20:40.815 killing process with pid 72993 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:40.815 
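The killprocess xtrace straddling this point is the stock autotest helper: it confirms the pid is alive, checks (via uname, then ps on Linux) that the process is an SPDK reactor rather than the sudo wrapper, and only then signals and reaps it. In outline (a sketch, not the verbatim function):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # reactor_0 here
        [[ $name == sudo ]] && return 1              # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap and surface exit status
    }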
13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72993' 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72993 00:20:40.815 13:46:34 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72993 00:20:42.717 [2024-11-06 13:46:36.459924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:42.717 [2024-11-06 13:46:36.492145] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:42.717 [2024-11-06 13:46:36.492290] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:42.717 [2024-11-06 13:46:36.502061] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:42.717 [2024-11-06 13:46:36.502126] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:42.717 [2024-11-06 13:46:36.502136] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:42.717 [2024-11-06 13:46:36.502168] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:42.717 [2024-11-06 13:46:36.502351] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:44.621 13:46:38 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:44.621 ************************************ 00:20:44.621 END TEST test_save_ublk_config 00:20:44.621 ************************************ 00:20:44.621 00:20:44.621 real 0m11.539s 00:20:44.621 user 0m8.817s 00:20:44.621 sys 0m3.627s 00:20:44.621 13:46:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:44.621 13:46:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:44.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.622 13:46:38 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73084 00:20:44.622 13:46:38 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.622 13:46:38 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73084 00:20:44.622 13:46:38 ublk -- common/autotest_common.sh@833 -- # '[' -z 73084 ']' 00:20:44.622 13:46:38 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:44.622 13:46:38 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.622 13:46:38 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:44.622 13:46:38 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.622 13:46:38 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:44.622 13:46:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:44.880 [2024-11-06 13:46:38.734997] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:20:44.880 [2024-11-06 13:46:38.735189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73084 ] 00:20:45.140 [2024-11-06 13:46:38.919710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:45.140 [2024-11-06 13:46:39.068138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.140 [2024-11-06 13:46:39.068165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.518 13:46:40 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:46.518 13:46:40 ublk -- common/autotest_common.sh@866 -- # return 0 00:20:46.518 13:46:40 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:46.518 13:46:40 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:46.518 13:46:40 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:46.518 13:46:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.518 ************************************ 00:20:46.518 START TEST test_create_ublk 00:20:46.518 ************************************ 00:20:46.518 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:20:46.518 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:46.518 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.518 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.518 [2024-11-06 13:46:40.139053] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:46.518 [2024-11-06 13:46:40.142526] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:46.518 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.518 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:46.518 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:46.518 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.518 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.777 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:46.777 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.777 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.777 [2024-11-06 13:46:40.509239] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:46.777 [2024-11-06 13:46:40.509787] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:46.777 [2024-11-06 13:46:40.509804] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:46.777 [2024-11-06 13:46:40.509814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:46.777 [2024-11-06 13:46:40.521533] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:46.777 [2024-11-06 13:46:40.525037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:46.777 
[2024-11-06 13:46:40.534073] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:46.777 [2024-11-06 13:46:40.534743] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:46.777 [2024-11-06 13:46:40.547654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:46.777 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:46.777 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.777 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:46.777 13:46:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:46.777 { 00:20:46.777 "ublk_device": "/dev/ublkb0", 00:20:46.777 "id": 0, 00:20:46.777 "queue_depth": 512, 00:20:46.777 "num_queues": 4, 00:20:46.777 "bdev_name": "Malloc0" 00:20:46.777 } 00:20:46.777 ]' 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:46.777 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:47.036 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:47.036 13:46:40 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:20:47.036 13:46:40 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:47.036 fio: verification read phase will never start because write phase uses all of runtime 00:20:47.036 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:47.036 fio-3.35 00:20:47.036 Starting 1 process 00:20:59.262 00:20:59.262 fio_test: (groupid=0, jobs=1): err= 0: pid=73142: Wed Nov 6 13:46:51 2024 00:20:59.262 write: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(442MiB/10002msec); 0 zone resets 00:20:59.262 clat (usec): min=39, max=4014, avg=87.52, stdev=101.40 00:20:59.262 lat (usec): min=40, max=4029, avg=88.00, stdev=101.41 00:20:59.262 clat percentiles (usec): 00:20:59.262 | 1.00th=[ 42], 5.00th=[ 72], 10.00th=[ 77], 20.00th=[ 80], 00:20:59.262 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 86], 00:20:59.262 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 93], 95.00th=[ 96], 00:20:59.262 | 99.00th=[ 106], 99.50th=[ 112], 99.90th=[ 2147], 99.95th=[ 2966], 00:20:59.262 | 99.99th=[ 3556] 00:20:59.262 bw ( KiB/s): min=43832, max=60167, per=100.00%, avg=45328.95, stdev=3605.68, samples=19 00:20:59.262 iops : min=10958, max=15041, avg=11332.16, stdev=901.26, samples=19 00:20:59.262 lat (usec) : 50=3.81%, 100=93.68%, 250=2.31%, 500=0.01%, 750=0.01% 00:20:59.262 lat (usec) : 1000=0.01% 00:20:59.262 lat (msec) : 2=0.07%, 4=0.10%, 10=0.01% 00:20:59.262 cpu : usr=2.31%, sys=7.84%, ctx=113191, majf=0, minf=796 00:20:59.262 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.262 issued rwts: total=0,113189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.262 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.262 00:20:59.262 Run status group 0 (all jobs): 00:20:59.262 WRITE: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=442MiB (464MB), run=10002-10002msec 00:20:59.262 00:20:59.262 Disk stats (read/write): 00:20:59.262 ublkb0: ios=0/112054, merge=0/0, ticks=0/8937, in_queue=8937, util=99.11% 00:20:59.262 13:46:51 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:59.262 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.262 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.262 [2024-11-06 13:46:51.056118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:59.262 [2024-11-06 13:46:51.092857] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:59.262 [2024-11-06 13:46:51.094075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:59.262 [2024-11-06 13:46:51.101180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:59.262 [2024-11-06 13:46:51.101596] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:59.262 [2024-11-06 13:46:51.101612] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:59.262 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.262 13:46:51 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:20:59.262 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 [2024-11-06 13:46:51.117192] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:59.263 request: 00:20:59.263 { 00:20:59.263 "ublk_id": 0, 00:20:59.263 "method": "ublk_stop_disk", 00:20:59.263 "req_id": 1 00:20:59.263 } 00:20:59.263 Got JSON-RPC error response 00:20:59.263 response: 00:20:59.263 { 00:20:59.263 "code": -19, 00:20:59.263 "message": "No such device" 00:20:59.263 } 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:59.263 13:46:51 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 [2024-11-06 13:46:51.133188] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:59.263 [2024-11-06 13:46:51.149058] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:59.263 [2024-11-06 13:46:51.149102] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:51 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:51 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:59.263 13:46:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 13:46:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:59.263 13:46:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:59.263 13:46:52 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:59.263 13:46:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:59.263 13:46:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 13:46:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:59.263 13:46:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:59.263 13:46:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:59.263 00:20:59.263 real 0m11.971s 00:20:59.263 user 0m0.615s 00:20:59.263 sys 0m0.915s 00:20:59.263 13:46:52 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:59.263 ************************************ 00:20:59.263 END TEST test_create_ublk 00:20:59.263 ************************************ 00:20:59.263 13:46:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 13:46:52 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:59.263 13:46:52 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:59.263 13:46:52 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:59.263 13:46:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 ************************************ 00:20:59.263 START TEST test_create_multi_ublk 00:20:59.263 ************************************ 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 [2024-11-06 13:46:52.170043] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:59.263 [2024-11-06 13:46:52.173375] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 [2024-11-06 13:46:52.516256] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:20:59.263 [2024-11-06 13:46:52.516848] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:59.263 [2024-11-06 13:46:52.516866] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:59.263 [2024-11-06 13:46:52.516883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:59.263 [2024-11-06 13:46:52.525515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:59.263 [2024-11-06 13:46:52.525544] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:59.263 [2024-11-06 13:46:52.532059] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:59.263 [2024-11-06 13:46:52.532709] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:59.263 [2024-11-06 13:46:52.541410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 [2024-11-06 13:46:52.914259] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:59.263 [2024-11-06 13:46:52.914786] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:59.263 [2024-11-06 13:46:52.914807] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:59.263 [2024-11-06 13:46:52.914816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:59.263 [2024-11-06 13:46:52.922552] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:59.263 [2024-11-06 13:46:52.922575] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:59.263 [2024-11-06 13:46:52.929078] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:59.263 [2024-11-06 13:46:52.929744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:59.263 [2024-11-06 13:46:52.948198] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:59.263 
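The @64 loop traced above and continuing below applies the same recipe once per device; in effect (a sketch of the loop body, with MAX_DEV_ID=3 as in this suite):

    for i in $(seq 0 3); do
        rpc_cmd bdev_malloc_create -b "Malloc$i" 128 4096     # 128 MiB bdev, 4 KiB blocks
        rpc_cmd ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # exposes /dev/ublkb$i
    done

Each ublk_start_disk kicks off the ADD_DEV → SET_PARAMS → START_DEV control-command handshake with the kernel driver that the surrounding DEBUG lines record (teardown later runs STOP_DEV → DEL_DEV in reverse).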
13:46:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.263 13:46:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.522 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.522 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:59.522 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:59.522 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.522 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.522 [2024-11-06 13:46:53.314197] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:59.522 [2024-11-06 13:46:53.314764] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:59.522 [2024-11-06 13:46:53.314783] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:59.522 [2024-11-06 13:46:53.314796] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:59.522 [2024-11-06 13:46:53.322110] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:59.522 [2024-11-06 13:46:53.322141] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:59.522 [2024-11-06 13:46:53.330095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:59.522 [2024-11-06 13:46:53.330798] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:59.522 [2024-11-06 13:46:53.335511] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:59.522 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.522 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:59.522 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:59.523 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:59.523 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.523 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.781 [2024-11-06 13:46:53.690248] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:59.781 [2024-11-06 13:46:53.690805] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:59.781 [2024-11-06 13:46:53.690827] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:59.781 [2024-11-06 13:46:53.690836] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:59.781 
[2024-11-06 13:46:53.698151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:59.781 [2024-11-06 13:46:53.698178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:59.781 [2024-11-06 13:46:53.705095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:59.781 [2024-11-06 13:46:53.705812] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:59.781 [2024-11-06 13:46:53.710530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:59.781 { 00:20:59.781 "ublk_device": "/dev/ublkb0", 00:20:59.781 "id": 0, 00:20:59.781 "queue_depth": 512, 00:20:59.781 "num_queues": 4, 00:20:59.781 "bdev_name": "Malloc0" 00:20:59.781 }, 00:20:59.781 { 00:20:59.781 "ublk_device": "/dev/ublkb1", 00:20:59.781 "id": 1, 00:20:59.781 "queue_depth": 512, 00:20:59.781 "num_queues": 4, 00:20:59.781 "bdev_name": "Malloc1" 00:20:59.781 }, 00:20:59.781 { 00:20:59.781 "ublk_device": "/dev/ublkb2", 00:20:59.781 "id": 2, 00:20:59.781 "queue_depth": 512, 00:20:59.781 "num_queues": 4, 00:20:59.781 "bdev_name": "Malloc2" 00:20:59.781 }, 00:20:59.781 { 00:20:59.781 "ublk_device": "/dev/ublkb3", 00:20:59.781 "id": 3, 00:20:59.781 "queue_depth": 512, 00:20:59.781 "num_queues": 4, 00:20:59.781 "bdev_name": "Malloc3" 00:20:59.781 } 00:20:59.781 ]' 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:59.781 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:21:00.040 13:46:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:21:00.298 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:00.557 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:00.816 [2024-11-06 13:46:54.617275] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:00.816 [2024-11-06 13:46:54.657798] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:00.816 [2024-11-06 13:46:54.659421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:00.816 [2024-11-06 13:46:54.664087] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:00.816 [2024-11-06 13:46:54.664462] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:00.816 [2024-11-06 13:46:54.664483] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:00.816 [2024-11-06 13:46:54.680174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:00.816 [2024-11-06 13:46:54.712747] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:00.816 [2024-11-06 13:46:54.714385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:00.816 [2024-11-06 13:46:54.720100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:00.816 [2024-11-06 13:46:54.720483] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:00.816 [2024-11-06 13:46:54.720504] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:00.816 [2024-11-06 13:46:54.736195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:21:00.816 [2024-11-06 13:46:54.773769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:00.816 [2024-11-06 13:46:54.775226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:21:00.816 [2024-11-06 13:46:54.782081] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:00.816 [2024-11-06 13:46:54.782484] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:21:00.816 [2024-11-06 13:46:54.782507] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.816 13:46:54 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.816 [2024-11-06 13:46:54.796205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:21:01.075 [2024-11-06 13:46:54.836063] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:01.075 [2024-11-06 13:46:54.837208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:21:01.075 [2024-11-06 13:46:54.845127] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:01.075 [2024-11-06 13:46:54.845489] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:21:01.075 [2024-11-06 13:46:54.845504] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:21:01.075 13:46:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.075 13:46:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:21:01.334 [2024-11-06 13:46:55.111203] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:01.334 [2024-11-06 13:46:55.119587] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:01.334 [2024-11-06 13:46:55.119634] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:01.334 13:46:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:21:01.334 13:46:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:01.334 13:46:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:01.334 13:46:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.334 13:46:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:02.269 13:46:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.269 13:46:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:02.269 13:46:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:02.269 13:46:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.269 13:46:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:02.528 13:46:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.528 13:46:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:02.528 13:46:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:21:02.528 13:46:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.528 13:46:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:02.787 13:46:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.787 13:46:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:02.787 13:46:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:21:02.787 13:46:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.787 13:46:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:21:03.402 13:46:57 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:03.402 00:21:03.402 real 0m5.102s 00:21:03.402 user 0m1.113s 00:21:03.402 sys 0m0.221s 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:03.402 13:46:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:03.402 ************************************ 00:21:03.402 END TEST test_create_multi_ublk 00:21:03.402 ************************************ 00:21:03.402 13:46:57 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:03.402 13:46:57 ublk -- ublk/ublk.sh@147 -- # cleanup 00:21:03.402 13:46:57 ublk -- ublk/ublk.sh@130 -- # killprocess 73084 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@952 -- # '[' -z 73084 ']' 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@956 -- # kill -0 73084 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@957 -- # uname 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73084 00:21:03.402 killing process with pid 73084 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73084' 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@971 -- # kill 73084 00:21:03.402 13:46:57 ublk -- common/autotest_common.sh@976 -- # wait 73084 00:21:04.780 [2024-11-06 13:46:58.667439] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:04.780 [2024-11-06 13:46:58.667544] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:06.157 00:21:06.157 real 0m33.343s 00:21:06.157 user 0m47.639s 00:21:06.157 sys 0m10.498s 00:21:06.157 13:47:00 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:06.157 ************************************ 00:21:06.157 END TEST ublk 00:21:06.157 ************************************ 00:21:06.158 13:47:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:06.417 13:47:00 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:06.417 
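run_test is the harness wrapper that prints the START TEST / END TEST banners and the real/user/sys summaries seen throughout this log, and the recovery suite below runs through the same wrapper. Roughly (a sketch, not the verbatim helper):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"            # e.g. test/ublk/ublk_recovery.sh
        echo "END TEST $name"
    }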
13:47:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:06.417 13:47:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:06.417 13:47:00 -- common/autotest_common.sh@10 -- # set +x 00:21:06.417 ************************************ 00:21:06.417 START TEST ublk_recovery 00:21:06.417 ************************************ 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:06.417 * Looking for test storage... 00:21:06.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.417 13:47:00 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:06.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.417 --rc genhtml_branch_coverage=1 00:21:06.417 --rc genhtml_function_coverage=1 00:21:06.417 --rc genhtml_legend=1 00:21:06.417 --rc geninfo_all_blocks=1 00:21:06.417 --rc geninfo_unexecuted_blocks=1 00:21:06.417 00:21:06.417 ' 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:06.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.417 --rc genhtml_branch_coverage=1 00:21:06.417 --rc genhtml_function_coverage=1 00:21:06.417 --rc genhtml_legend=1 00:21:06.417 --rc geninfo_all_blocks=1 00:21:06.417 --rc geninfo_unexecuted_blocks=1 00:21:06.417 00:21:06.417 ' 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:06.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.417 --rc genhtml_branch_coverage=1 00:21:06.417 --rc genhtml_function_coverage=1 00:21:06.417 --rc genhtml_legend=1 00:21:06.417 --rc geninfo_all_blocks=1 00:21:06.417 --rc geninfo_unexecuted_blocks=1 00:21:06.417 00:21:06.417 ' 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:06.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.417 --rc genhtml_branch_coverage=1 00:21:06.417 --rc genhtml_function_coverage=1 00:21:06.417 --rc genhtml_legend=1 00:21:06.417 --rc geninfo_all_blocks=1 00:21:06.417 --rc geninfo_unexecuted_blocks=1 00:21:06.417 00:21:06.417 ' 00:21:06.417 13:47:00 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:06.417 13:47:00 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:06.417 13:47:00 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:06.417 13:47:00 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:06.417 13:47:00 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:06.417 13:47:00 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:06.417 13:47:00 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:06.417 13:47:00 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:06.417 13:47:00 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:21:06.417 13:47:00 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:21:06.417 13:47:00 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73528 00:21:06.417 13:47:00 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:06.417 13:47:00 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.417 13:47:00 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73528 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73528 ']' 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:06.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:06.417 13:47:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.677 [2024-11-06 13:47:00.514885] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:21:06.677 [2024-11-06 13:47:00.515087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73528 ] 00:21:06.935 [2024-11-06 13:47:00.706896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:06.935 [2024-11-06 13:47:00.821804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.935 [2024-11-06 13:47:00.821831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.872 13:47:01 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:07.872 13:47:01 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:21:07.872 13:47:01 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:21:07.872 13:47:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.872 13:47:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.132 [2024-11-06 13:47:01.856058] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:08.132 [2024-11-06 13:47:01.859670] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:08.132 13:47:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.132 13:47:01 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:08.132 13:47:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.132 13:47:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.132 malloc0 00:21:08.132 13:47:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.132 13:47:02 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:21:08.132 13:47:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.132 13:47:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.132 [2024-11-06 13:47:02.047286] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:21:08.132 [2024-11-06 13:47:02.047459] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:21:08.132 [2024-11-06 13:47:02.047479] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:08.132 [2024-11-06 13:47:02.047494] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:08.132 [2024-11-06 13:47:02.056284] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:08.132 [2024-11-06 13:47:02.056315] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:08.132 [2024-11-06 13:47:02.063074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:08.132 [2024-11-06 13:47:02.063276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:08.132 [2024-11-06 13:47:02.084072] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:08.132 1 00:21:08.132 13:47:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.132 13:47:02 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:21:09.508 13:47:03 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73569 00:21:09.508 13:47:03 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:21:09.508 13:47:03 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:21:09.508 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:09.508 fio-3.35 00:21:09.508 Starting 1 process 00:21:14.798 13:47:08 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73528 00:21:14.798 13:47:08 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:21:20.072 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73528 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:21:20.072 13:47:13 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73681 00:21:20.072 13:47:13 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:20.072 13:47:13 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.072 13:47:13 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73681 00:21:20.072 13:47:13 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73681 ']' 00:21:20.072 13:47:13 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.072 13:47:13 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:20.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.072 13:47:13 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.072 13:47:13 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:20.072 13:47:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.072 [2024-11-06 13:47:13.256085] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
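The trace above is the core of ublk_recovery.sh: bring up a target, expose malloc0 as /dev/ublkb1, start a 60-second random read/write fio job, then SIGKILL the target mid-I/O so a second target instance can reclaim the kernel device. A condensed sketch of that flow, using the same SPDK RPCs that appear in the log (waitforlisten is replaced by a plain sleep here; the taskset pinning, traps, and killprocess cleanup are omitted):

rpc=scripts/rpc.py
build/bin/spdk_tgt -m 0x3 -L ublk & tgt=$!
sleep 2                                       # real script: waitforlisten "$tgt"
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096    # 64 MiB bdev, 4 KiB blocks
$rpc ublk_start_disk malloc0 1 -q 2 -d 128    # exposes /dev/ublkb1
fio --name=fio_test --filename=/dev/ublkb1 --ioengine=libaio --rw=randrw \
    --direct=1 --iodepth=128 --time_based --runtime=60 & fio_pid=$!
sleep 5
kill -9 "$tgt"                                # simulate a target crash mid-I/O
build/bin/spdk_tgt -m 0x3 -L ublk & tgt=$!
sleep 2
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096    # recreate the backing bdev
$rpc ublk_recover_disk malloc0 1              # reattach the existing /dev/ublkb1
wait "$fio_pid"                               # fio should run to completion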
00:21:20.072 [2024-11-06 13:47:13.256257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73681 ] 00:21:20.072 [2024-11-06 13:47:13.448481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:20.072 [2024-11-06 13:47:13.633717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.072 [2024-11-06 13:47:13.633739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.639 13:47:14 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:20.639 13:47:14 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:21:20.639 13:47:14 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:21:20.639 13:47:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.639 13:47:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.640 [2024-11-06 13:47:14.545041] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:20.640 [2024-11-06 13:47:14.547835] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:20.640 13:47:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.640 13:47:14 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:20.640 13:47:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.640 13:47:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.899 malloc0 00:21:20.899 13:47:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.899 13:47:14 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:21:20.899 13:47:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.899 13:47:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.899 [2024-11-06 13:47:14.703216] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:21:20.899 [2024-11-06 13:47:14.703260] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:20.899 [2024-11-06 13:47:14.703272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:20.899 [2024-11-06 13:47:14.711076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:20.899 [2024-11-06 13:47:14.711102] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:21:20.899 [2024-11-06 13:47:14.711111] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:21:20.899 [2024-11-06 13:47:14.711208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:21:20.899 1 00:21:20.899 13:47:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.899 13:47:14 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73569 00:21:20.899 [2024-11-06 13:47:14.719045] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:21:20.899 [2024-11-06 13:47:14.723133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:21:20.899 [2024-11-06 13:47:14.733227] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:21:20.899 [2024-11-06 
13:47:14.733255] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:22:17.131 00:22:17.131 fio_test: (groupid=0, jobs=1): err= 0: pid=73572: Wed Nov 6 13:48:03 2024 00:22:17.131 read: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(4562MiB/60002msec) 00:22:17.131 slat (usec): min=2, max=373, avg= 6.80, stdev= 1.84 00:22:17.131 clat (usec): min=1445, max=6640.8k, avg=3179.35, stdev=45952.02 00:22:17.131 lat (usec): min=1451, max=6640.8k, avg=3186.15, stdev=45952.03 00:22:17.131 clat percentiles (usec): 00:22:17.131 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2606], 00:22:17.131 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2769], 00:22:17.131 | 70.00th=[ 2835], 80.00th=[ 2966], 90.00th=[ 3195], 95.00th=[ 3884], 00:22:17.131 | 99.00th=[ 5407], 99.50th=[ 5800], 99.90th=[ 7046], 99.95th=[ 7504], 00:22:17.131 | 99.99th=[12649] 00:22:17.131 bw ( KiB/s): min= 9485, max=100368, per=100.00%, avg=86596.68, stdev=10913.91, samples=107 00:22:17.131 iops : min= 2371, max=25092, avg=21649.17, stdev=2728.49, samples=107 00:22:17.131 write: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(4559MiB/60002msec); 0 zone resets 00:22:17.131 slat (usec): min=2, max=255, avg= 6.88, stdev= 1.86 00:22:17.131 clat (usec): min=1562, max=6641.2k, avg=3384.32, stdev=52119.59 00:22:17.131 lat (usec): min=1569, max=6641.2k, avg=3391.20, stdev=52119.59 00:22:17.131 clat percentiles (usec): 00:22:17.131 | 1.00th=[ 2376], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2737], 00:22:17.131 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2900], 00:22:17.131 | 70.00th=[ 2966], 80.00th=[ 3064], 90.00th=[ 3326], 95.00th=[ 3785], 00:22:17.131 | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 7111], 99.95th=[ 7635], 00:22:17.131 | 99.99th=[12911] 00:22:17.131 bw ( KiB/s): min= 9413, max=100944, per=100.00%, avg=86554.81, stdev=10878.15, samples=107 00:22:17.131 iops : min= 2353, max=25236, avg=21638.70, stdev=2719.55, samples=107 00:22:17.131 lat (msec) : 2=0.16%, 4=95.39%, 10=4.44%, 20=0.01%, >=2000=0.01% 00:22:17.131 cpu : usr=9.06%, sys=26.36%, ctx=77226, majf=0, minf=13 00:22:17.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:22:17.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:17.131 issued rwts: total=1167805,1167150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:17.131 00:22:17.131 Run status group 0 (all jobs): 00:22:17.131 READ: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=4562MiB (4783MB), run=60002-60002msec 00:22:17.131 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=4559MiB (4781MB), run=60002-60002msec 00:22:17.131 00:22:17.131 Disk stats (read/write): 00:22:17.131 ublkb1: ios=1165325/1164685, merge=0/0, ticks=3621666/3719244, in_queue=7340910, util=99.92% 00:22:17.131 13:48:03 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:22:17.131 13:48:03 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.131 13:48:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.132 [2024-11-06 13:48:03.383360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:17.132 [2024-11-06 13:48:03.415146] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:17.132 [2024-11-06 13:48:03.415350] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:17.132 [2024-11-06 13:48:03.422061] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:17.132 [2024-11-06 13:48:03.422197] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:17.132 [2024-11-06 13:48:03.422212] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.132 13:48:03 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.132 [2024-11-06 13:48:03.437164] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:17.132 [2024-11-06 13:48:03.446047] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:17.132 [2024-11-06 13:48:03.446086] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.132 13:48:03 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:22:17.132 13:48:03 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:22:17.132 13:48:03 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73681 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 73681 ']' 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 73681 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73681 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:17.132 killing process with pid 73681 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73681' 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@971 -- # kill 73681 00:22:17.132 13:48:03 ublk_recovery -- common/autotest_common.sh@976 -- # wait 73681 00:22:17.132 [2024-11-06 13:48:05.347029] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:17.132 [2024-11-06 13:48:05.347110] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:17.132 00:22:17.132 real 1m6.676s 00:22:17.132 user 1m51.151s 00:22:17.132 sys 0m32.322s 00:22:17.132 ************************************ 00:22:17.132 END TEST ublk_recovery 00:22:17.132 ************************************ 00:22:17.132 13:48:06 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:17.132 13:48:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.132 13:48:06 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@256 -- # timing_exit lib 00:22:17.132 13:48:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.132 13:48:06 -- common/autotest_common.sh@10 -- # set +x 00:22:17.132 13:48:06 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@311 
-- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:22:17.132 13:48:06 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:17.132 13:48:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:17.132 13:48:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:17.132 13:48:06 -- common/autotest_common.sh@10 -- # set +x 00:22:17.132 ************************************ 00:22:17.132 START TEST ftl 00:22:17.132 ************************************ 00:22:17.132 13:48:06 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:17.132 * Looking for test storage... 00:22:17.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:17.132 13:48:07 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.132 13:48:07 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.132 13:48:07 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.132 13:48:07 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.132 13:48:07 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.132 13:48:07 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.132 13:48:07 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.132 13:48:07 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.132 13:48:07 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.132 13:48:07 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.132 13:48:07 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.132 13:48:07 ftl -- scripts/common.sh@344 -- # case "$op" in 00:22:17.132 13:48:07 ftl -- scripts/common.sh@345 -- # : 1 00:22:17.132 13:48:07 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.132 13:48:07 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.132 13:48:07 ftl -- scripts/common.sh@365 -- # decimal 1 00:22:17.132 13:48:07 ftl -- scripts/common.sh@353 -- # local d=1 00:22:17.132 13:48:07 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.132 13:48:07 ftl -- scripts/common.sh@355 -- # echo 1 00:22:17.132 13:48:07 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.132 13:48:07 ftl -- scripts/common.sh@366 -- # decimal 2 00:22:17.132 13:48:07 ftl -- scripts/common.sh@353 -- # local d=2 00:22:17.132 13:48:07 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.132 13:48:07 ftl -- scripts/common.sh@355 -- # echo 2 00:22:17.132 13:48:07 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.132 13:48:07 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.132 13:48:07 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.132 13:48:07 ftl -- scripts/common.sh@368 -- # return 0 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:17.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.132 --rc genhtml_branch_coverage=1 00:22:17.132 --rc genhtml_function_coverage=1 00:22:17.132 --rc genhtml_legend=1 00:22:17.132 --rc geninfo_all_blocks=1 00:22:17.132 --rc geninfo_unexecuted_blocks=1 00:22:17.132 00:22:17.132 ' 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:17.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.132 --rc genhtml_branch_coverage=1 00:22:17.132 --rc genhtml_function_coverage=1 00:22:17.132 --rc genhtml_legend=1 00:22:17.132 --rc geninfo_all_blocks=1 00:22:17.132 --rc geninfo_unexecuted_blocks=1 00:22:17.132 00:22:17.132 ' 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:17.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.132 --rc genhtml_branch_coverage=1 00:22:17.132 --rc genhtml_function_coverage=1 00:22:17.132 --rc genhtml_legend=1 00:22:17.132 --rc geninfo_all_blocks=1 00:22:17.132 --rc geninfo_unexecuted_blocks=1 00:22:17.132 00:22:17.132 ' 00:22:17.132 13:48:07 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:17.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.132 --rc genhtml_branch_coverage=1 00:22:17.132 --rc genhtml_function_coverage=1 00:22:17.132 --rc genhtml_legend=1 00:22:17.132 --rc geninfo_all_blocks=1 00:22:17.132 --rc geninfo_unexecuted_blocks=1 00:22:17.132 00:22:17.132 ' 00:22:17.132 13:48:07 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:17.132 13:48:07 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:17.133 13:48:07 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:17.133 13:48:07 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:17.133 13:48:07 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
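The `lt 1.15 2` trace that opens each test file is scripts/common.sh's component-wise version compare: split both strings on '.', '-' or ':' and compare the fields numerically, left to right. A minimal re-derivation of the logic being traced (the real cmp_versions also sanitizes non-numeric fields through its decimal helper, which is skipped here):

lt() {  # succeed when version $1 sorts strictly before version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields count as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov older than 2: fall back to the legacy LCOV_OPTS'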
00:22:17.133 13:48:07 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:17.133 13:48:07 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:17.133 13:48:07 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:17.133 13:48:07 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:17.133 13:48:07 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.133 13:48:07 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.133 13:48:07 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:17.133 13:48:07 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:17.133 13:48:07 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:17.133 13:48:07 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:17.133 13:48:07 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:17.133 13:48:07 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:17.133 13:48:07 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.133 13:48:07 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.133 13:48:07 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:17.133 13:48:07 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:17.133 13:48:07 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:17.133 13:48:07 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:17.133 13:48:07 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:17.133 13:48:07 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:17.133 13:48:07 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:17.133 13:48:07 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:17.133 13:48:07 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:17.133 13:48:07 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:17.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:17.133 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:17.133 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:17.133 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:17.133 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74486 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:22:17.133 13:48:07 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74486 00:22:17.133 13:48:07 ftl -- common/autotest_common.sh@833 -- # '[' -z 74486 ']' 00:22:17.133 13:48:07 ftl -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.133 13:48:07 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:17.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.133 13:48:07 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.133 13:48:07 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:17.133 13:48:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:17.133 [2024-11-06 13:48:08.006530] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:22:17.133 [2024-11-06 13:48:08.006710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74486 ] 00:22:17.133 [2024-11-06 13:48:08.209782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.133 [2024-11-06 13:48:08.388289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.133 13:48:08 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:17.133 13:48:08 ftl -- common/autotest_common.sh@866 -- # return 0 00:22:17.133 13:48:08 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:22:17.133 13:48:09 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:17.133 13:48:10 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:17.133 13:48:10 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:22:17.133 13:48:10 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:22:17.133 13:48:10 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:17.133 13:48:10 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:17.391 13:48:11 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:22:17.391 13:48:11 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:22:17.391 13:48:11 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:22:17.391 13:48:11 ftl -- ftl/ftl.sh@50 -- # break 00:22:17.392 13:48:11 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:22:17.392 13:48:11 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:22:17.392 13:48:11 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:17.392 13:48:11 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:17.650 13:48:11 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:22:17.650 13:48:11 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:22:17.650 13:48:11 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:22:17.650 13:48:11 ftl -- ftl/ftl.sh@63 -- # break 00:22:17.650 13:48:11 ftl -- ftl/ftl.sh@66 -- # killprocess 74486 00:22:17.650 13:48:11 ftl -- common/autotest_common.sh@952 -- # '[' -z 74486 ']' 00:22:17.650 13:48:11 ftl -- common/autotest_common.sh@956 -- # kill -0 74486 00:22:17.650 13:48:11 ftl -- common/autotest_common.sh@957 -- # uname 00:22:17.650 13:48:11 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:17.651 13:48:11 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74486 00:22:17.651 13:48:11 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:17.651 13:48:11 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:17.651 killing process with pid 74486 00:22:17.651 13:48:11 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74486' 00:22:17.651 13:48:11 ftl -- common/autotest_common.sh@971 -- # kill 74486 00:22:17.651 13:48:11 ftl -- common/autotest_common.sh@976 -- # wait 74486 00:22:20.183 13:48:14 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:22:20.183 13:48:14 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:20.442 13:48:14 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:20.442 13:48:14 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:20.442 13:48:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:20.442 ************************************ 00:22:20.442 START TEST ftl_fio_basic 00:22:20.442 ************************************ 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:20.442 * Looking for test storage... 00:22:20.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:20.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.442 --rc genhtml_branch_coverage=1 00:22:20.442 --rc genhtml_function_coverage=1 00:22:20.442 --rc genhtml_legend=1 00:22:20.442 --rc geninfo_all_blocks=1 00:22:20.442 --rc geninfo_unexecuted_blocks=1 00:22:20.442 00:22:20.442 ' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:20.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.442 --rc genhtml_branch_coverage=1 00:22:20.442 --rc genhtml_function_coverage=1 00:22:20.442 --rc genhtml_legend=1 00:22:20.442 --rc geninfo_all_blocks=1 00:22:20.442 --rc geninfo_unexecuted_blocks=1 00:22:20.442 00:22:20.442 ' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:20.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.442 --rc genhtml_branch_coverage=1 00:22:20.442 --rc genhtml_function_coverage=1 00:22:20.442 --rc genhtml_legend=1 00:22:20.442 --rc geninfo_all_blocks=1 00:22:20.442 --rc geninfo_unexecuted_blocks=1 00:22:20.442 00:22:20.442 ' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:20.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.442 --rc genhtml_branch_coverage=1 00:22:20.442 --rc genhtml_function_coverage=1 00:22:20.442 --rc genhtml_legend=1 00:22:20.442 --rc geninfo_all_blocks=1 00:22:20.442 --rc geninfo_unexecuted_blocks=1 00:22:20.442 00:22:20.442 ' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
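A few entries back, ftl.sh picked its NV-cache and base devices straight out of bdev_get_bdevs JSON (ftl/ftl.sh@47 and @60). The same two probes as standalone queries, with the cache address parameterized rather than hard-coded; this assumes only NVMe bdevs are attached at that point, as in the script, and keeps its 1310720-block (5 GiB at 4 KiB) floor for both cache and base:

rpc=scripts/rpc.py
# NV-cache candidates: 64-byte metadata, non-zoned, large enough
cache=$($rpc bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
         .driver_specific.nvme[].pci_address' | head -n1)
# base candidates: any other non-zoned device of the same minimum size
base=$($rpc bdev_get_bdevs | jq -r --arg c "$cache" \
    '.[] | select(.driver_specific.nvme[0].pci_address != $c
                  and .zoned == false and .num_blocks >= 1310720)
         .driver_specific.nvme[].pci_address' | head -n1)
echo "nv_cache=$cache base=$base"   # the script loops and breaks; head -n1 here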
00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:20.442 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74635 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74635 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 74635 ']' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:20.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:20.443 13:48:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:20.701 [2024-11-06 13:48:14.566817] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
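The fio.sh@12-14 entries above are the whole test matrix: an associative array mapping each suite name to a space-separated list of fio job configs, with the third positional argument ('basic' on this run) selecting the list. A skeleton of that dispatch, with the job loop reduced to an echo and the usage message invented for the sketch:

declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

tests=${suite[$3]}    # $1 = base bdf, $2 = cache bdf, $3 = suite name
if [ -z "$tests" ]; then
    echo "unknown suite: $3" >&2
    exit 1
fi
for t in $tests; do
    echo "running fio job config $t"   # the real script hands each to fio
done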
00:22:20.701 [2024-11-06 13:48:14.566985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74635 ] 00:22:20.959 [2024-11-06 13:48:14.752841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:20.959 [2024-11-06 13:48:14.906340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.959 [2024-11-06 13:48:14.906401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.959 [2024-11-06 13:48:14.906416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.337 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:22.337 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:22:22.337 13:48:16 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:22.337 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:22:22.337 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:22.337 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:22:22.337 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:22:22.337 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:22.596 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:22.596 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:22:22.596 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:22.596 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:22:22.596 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:22.596 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:22:22.596 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:22:22.596 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:22.855 { 00:22:22.855 "name": "nvme0n1", 00:22:22.855 "aliases": [ 00:22:22.855 "e6193a68-1425-4cea-bc8f-fab4f9650b69" 00:22:22.855 ], 00:22:22.855 "product_name": "NVMe disk", 00:22:22.855 "block_size": 4096, 00:22:22.855 "num_blocks": 1310720, 00:22:22.855 "uuid": "e6193a68-1425-4cea-bc8f-fab4f9650b69", 00:22:22.855 "numa_id": -1, 00:22:22.855 "assigned_rate_limits": { 00:22:22.855 "rw_ios_per_sec": 0, 00:22:22.855 "rw_mbytes_per_sec": 0, 00:22:22.855 "r_mbytes_per_sec": 0, 00:22:22.855 "w_mbytes_per_sec": 0 00:22:22.855 }, 00:22:22.855 "claimed": false, 00:22:22.855 "zoned": false, 00:22:22.855 "supported_io_types": { 00:22:22.855 "read": true, 00:22:22.855 "write": true, 00:22:22.855 "unmap": true, 00:22:22.855 "flush": true, 00:22:22.855 "reset": true, 00:22:22.855 "nvme_admin": true, 00:22:22.855 "nvme_io": true, 00:22:22.855 "nvme_io_md": false, 00:22:22.855 "write_zeroes": true, 00:22:22.855 "zcopy": false, 00:22:22.855 "get_zone_info": false, 00:22:22.855 "zone_management": false, 00:22:22.855 "zone_append": false, 00:22:22.855 "compare": true, 00:22:22.855 "compare_and_write": false, 00:22:22.855 "abort": true, 00:22:22.855 
"seek_hole": false, 00:22:22.855 "seek_data": false, 00:22:22.855 "copy": true, 00:22:22.855 "nvme_iov_md": false 00:22:22.855 }, 00:22:22.855 "driver_specific": { 00:22:22.855 "nvme": [ 00:22:22.855 { 00:22:22.855 "pci_address": "0000:00:11.0", 00:22:22.855 "trid": { 00:22:22.855 "trtype": "PCIe", 00:22:22.855 "traddr": "0000:00:11.0" 00:22:22.855 }, 00:22:22.855 "ctrlr_data": { 00:22:22.855 "cntlid": 0, 00:22:22.855 "vendor_id": "0x1b36", 00:22:22.855 "model_number": "QEMU NVMe Ctrl", 00:22:22.855 "serial_number": "12341", 00:22:22.855 "firmware_revision": "8.0.0", 00:22:22.855 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:22.855 "oacs": { 00:22:22.855 "security": 0, 00:22:22.855 "format": 1, 00:22:22.855 "firmware": 0, 00:22:22.855 "ns_manage": 1 00:22:22.855 }, 00:22:22.855 "multi_ctrlr": false, 00:22:22.855 "ana_reporting": false 00:22:22.855 }, 00:22:22.855 "vs": { 00:22:22.855 "nvme_version": "1.4" 00:22:22.855 }, 00:22:22.855 "ns_data": { 00:22:22.855 "id": 1, 00:22:22.855 "can_share": false 00:22:22.855 } 00:22:22.855 } 00:22:22.855 ], 00:22:22.855 "mp_policy": "active_passive" 00:22:22.855 } 00:22:22.855 } 00:22:22.855 ]' 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:22.855 13:48:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:23.114 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:22:23.114 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:23.372 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d96f729a-06e7-4af9-ac34-811209766011 00:22:23.372 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d96f729a-06e7-4af9-ac34-811209766011 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 
00:22:23.631 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:22:23.631 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:24.199 { 00:22:24.199 "name": "d83ad68b-fb4e-4c3f-8f0d-b347deb4e064", 00:22:24.199 "aliases": [ 00:22:24.199 "lvs/nvme0n1p0" 00:22:24.199 ], 00:22:24.199 "product_name": "Logical Volume", 00:22:24.199 "block_size": 4096, 00:22:24.199 "num_blocks": 26476544, 00:22:24.199 "uuid": "d83ad68b-fb4e-4c3f-8f0d-b347deb4e064", 00:22:24.199 "assigned_rate_limits": { 00:22:24.199 "rw_ios_per_sec": 0, 00:22:24.199 "rw_mbytes_per_sec": 0, 00:22:24.199 "r_mbytes_per_sec": 0, 00:22:24.199 "w_mbytes_per_sec": 0 00:22:24.199 }, 00:22:24.199 "claimed": false, 00:22:24.199 "zoned": false, 00:22:24.199 "supported_io_types": { 00:22:24.199 "read": true, 00:22:24.199 "write": true, 00:22:24.199 "unmap": true, 00:22:24.199 "flush": false, 00:22:24.199 "reset": true, 00:22:24.199 "nvme_admin": false, 00:22:24.199 "nvme_io": false, 00:22:24.199 "nvme_io_md": false, 00:22:24.199 "write_zeroes": true, 00:22:24.199 "zcopy": false, 00:22:24.199 "get_zone_info": false, 00:22:24.199 "zone_management": false, 00:22:24.199 "zone_append": false, 00:22:24.199 "compare": false, 00:22:24.199 "compare_and_write": false, 00:22:24.199 "abort": false, 00:22:24.199 "seek_hole": true, 00:22:24.199 "seek_data": true, 00:22:24.199 "copy": false, 00:22:24.199 "nvme_iov_md": false 00:22:24.199 }, 00:22:24.199 "driver_specific": { 00:22:24.199 "lvol": { 00:22:24.199 "lvol_store_uuid": "d96f729a-06e7-4af9-ac34-811209766011", 00:22:24.199 "base_bdev": "nvme0n1", 00:22:24.199 "thin_provision": true, 00:22:24.199 "num_allocated_clusters": 0, 00:22:24.199 "snapshot": false, 00:22:24.199 "clone": false, 00:22:24.199 "esnap_clone": false 00:22:24.199 } 00:22:24.199 } 00:22:24.199 } 00:22:24.199 ]' 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:22:24.199 13:48:17 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:24.458 13:48:18 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:24.458 13:48:18 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:24.458 13:48:18 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:24.458 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:24.458 13:48:18 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:24.458 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:22:24.458 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:22:24.458 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:24.718 { 00:22:24.718 "name": "d83ad68b-fb4e-4c3f-8f0d-b347deb4e064", 00:22:24.718 "aliases": [ 00:22:24.718 "lvs/nvme0n1p0" 00:22:24.718 ], 00:22:24.718 "product_name": "Logical Volume", 00:22:24.718 "block_size": 4096, 00:22:24.718 "num_blocks": 26476544, 00:22:24.718 "uuid": "d83ad68b-fb4e-4c3f-8f0d-b347deb4e064", 00:22:24.718 "assigned_rate_limits": { 00:22:24.718 "rw_ios_per_sec": 0, 00:22:24.718 "rw_mbytes_per_sec": 0, 00:22:24.718 "r_mbytes_per_sec": 0, 00:22:24.718 "w_mbytes_per_sec": 0 00:22:24.718 }, 00:22:24.718 "claimed": false, 00:22:24.718 "zoned": false, 00:22:24.718 "supported_io_types": { 00:22:24.718 "read": true, 00:22:24.718 "write": true, 00:22:24.718 "unmap": true, 00:22:24.718 "flush": false, 00:22:24.718 "reset": true, 00:22:24.718 "nvme_admin": false, 00:22:24.718 "nvme_io": false, 00:22:24.718 "nvme_io_md": false, 00:22:24.718 "write_zeroes": true, 00:22:24.718 "zcopy": false, 00:22:24.718 "get_zone_info": false, 00:22:24.718 "zone_management": false, 00:22:24.718 "zone_append": false, 00:22:24.718 "compare": false, 00:22:24.718 "compare_and_write": false, 00:22:24.718 "abort": false, 00:22:24.718 "seek_hole": true, 00:22:24.718 "seek_data": true, 00:22:24.718 "copy": false, 00:22:24.718 "nvme_iov_md": false 00:22:24.718 }, 00:22:24.718 "driver_specific": { 00:22:24.718 "lvol": { 00:22:24.718 "lvol_store_uuid": "d96f729a-06e7-4af9-ac34-811209766011", 00:22:24.718 "base_bdev": "nvme0n1", 00:22:24.718 "thin_provision": true, 00:22:24.718 "num_allocated_clusters": 0, 00:22:24.718 "snapshot": false, 00:22:24.718 "clone": false, 00:22:24.718 "esnap_clone": false 00:22:24.718 } 00:22:24.718 } 00:22:24.718 } 00:22:24.718 ]' 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:22:24.718 13:48:18 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:22:24.977 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 00:22:24.977 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:24.977 { 00:22:24.977 "name": "d83ad68b-fb4e-4c3f-8f0d-b347deb4e064", 00:22:24.977 "aliases": [ 00:22:24.977 "lvs/nvme0n1p0" 00:22:24.977 ], 00:22:24.977 "product_name": "Logical Volume", 00:22:24.977 "block_size": 4096, 00:22:24.977 "num_blocks": 26476544, 00:22:24.977 "uuid": "d83ad68b-fb4e-4c3f-8f0d-b347deb4e064", 00:22:24.977 "assigned_rate_limits": { 00:22:24.977 "rw_ios_per_sec": 0, 00:22:24.977 "rw_mbytes_per_sec": 0, 00:22:24.977 "r_mbytes_per_sec": 0, 00:22:24.977 "w_mbytes_per_sec": 0 00:22:24.977 }, 00:22:24.977 "claimed": false, 00:22:24.977 "zoned": false, 00:22:24.977 "supported_io_types": { 00:22:24.977 "read": true, 00:22:24.977 "write": true, 00:22:24.977 "unmap": true, 00:22:24.977 "flush": false, 00:22:24.977 "reset": true, 00:22:24.977 "nvme_admin": false, 00:22:24.977 "nvme_io": false, 00:22:24.977 "nvme_io_md": false, 00:22:24.977 "write_zeroes": true, 00:22:24.977 "zcopy": false, 00:22:24.977 "get_zone_info": false, 00:22:24.977 "zone_management": false, 00:22:24.977 "zone_append": false, 00:22:24.977 "compare": false, 00:22:24.977 "compare_and_write": false, 00:22:24.977 "abort": false, 00:22:24.977 "seek_hole": true, 00:22:24.977 "seek_data": true, 00:22:24.977 "copy": false, 00:22:24.977 "nvme_iov_md": false 00:22:24.977 }, 00:22:24.977 "driver_specific": { 00:22:24.977 "lvol": { 00:22:24.977 "lvol_store_uuid": "d96f729a-06e7-4af9-ac34-811209766011", 00:22:24.977 "base_bdev": "nvme0n1", 00:22:24.977 "thin_provision": true, 00:22:24.977 "num_allocated_clusters": 0, 00:22:24.977 "snapshot": false, 00:22:24.977 "clone": false, 00:22:24.977 "esnap_clone": false 00:22:24.977 } 00:22:24.977 } 00:22:24.977 } 00:22:24.977 ]' 00:22:25.237 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:25.237 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:22:25.237 13:48:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:25.237 13:48:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:22:25.237 13:48:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:22:25.237 13:48:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:22:25.237 13:48:19 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:22:25.237 13:48:19 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:22:25.237 13:48:19 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d83ad68b-fb4e-4c3f-8f0d-b347deb4e064 -c nvc0n1p0 --l2p_dram_limit 60 00:22:25.502 [2024-11-06 13:48:19.297448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.297504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:25.502 [2024-11-06 13:48:19.297526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:25.502 
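The repeated get_bdev_size traces above all resolve the same way: jq pulls block_size and num_blocks out of the bdev_get_bdevs JSON, and the helper reports the size in MiB. The reported bdev_size=103424 follows directly from those two fields; a one-liner reproducing the arithmetic with the values from the dump:

    bs=4096; nb=26476544            # block_size and num_blocks from the JSON above
    echo $(( bs * nb / 1048576 ))   # bytes to MiB; prints 103424, matching bdev_size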
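One genuine script failure is recorded in the trace above: /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected. The xtrace line shows the test ran as '[' -eq 1 ']', i.e. the variable left of -eq expanded to an empty string and the [ builtin saw -eq with only one operand. The run proceeds anyway because the malformed test simply returns a nonzero status, which the surrounding if treats as false. A minimal defensive rewrite, using the hypothetical name flag as a stand-in for whatever variable line 52 actually tests:

    flag=""                              # empty, as in the trace
    # [ $flag -eq 1 ] reproduces "[: -eq: unary operator expected";
    # defaulting the expansion and using [[ ]] keeps the test well-formed:
    if [[ "${flag:-0}" -eq 1 ]]; then
        echo "branch taken"
    fi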
[2024-11-06 13:48:19.297538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.297657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.297676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.502 [2024-11-06 13:48:19.297694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:25.502 [2024-11-06 13:48:19.297704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.297754] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:25.502 [2024-11-06 13:48:19.298945] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:25.502 [2024-11-06 13:48:19.298990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.299006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.502 [2024-11-06 13:48:19.299036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:22:25.502 [2024-11-06 13:48:19.299050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.299208] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7e66917e-2b4c-45e3-9c23-da51cdbdf28a 00:22:25.502 [2024-11-06 13:48:19.300753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.300788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:25.502 [2024-11-06 13:48:19.300817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:25.502 [2024-11-06 13:48:19.300832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.308586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.308636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.502 [2024-11-06 13:48:19.308652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.660 ms 00:22:25.502 [2024-11-06 13:48:19.308666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.308822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.308839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.502 [2024-11-06 13:48:19.308851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:25.502 [2024-11-06 13:48:19.308867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.308979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.308995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:25.502 [2024-11-06 13:48:19.309007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:25.502 [2024-11-06 13:48:19.309021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.309082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:25.502 [2024-11-06 13:48:19.314780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 
13:48:19.314815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.502 [2024-11-06 13:48:19.314832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.702 ms 00:22:25.502 [2024-11-06 13:48:19.314846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.314913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.314926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:25.502 [2024-11-06 13:48:19.314941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:25.502 [2024-11-06 13:48:19.314951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.502 [2024-11-06 13:48:19.315012] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:25.502 [2024-11-06 13:48:19.315188] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:25.502 [2024-11-06 13:48:19.315212] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:25.502 [2024-11-06 13:48:19.315227] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:25.502 [2024-11-06 13:48:19.315245] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:25.502 [2024-11-06 13:48:19.315258] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:25.502 [2024-11-06 13:48:19.315273] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:25.502 [2024-11-06 13:48:19.315284] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:25.502 [2024-11-06 13:48:19.315297] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:25.502 [2024-11-06 13:48:19.315308] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:25.502 [2024-11-06 13:48:19.315322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.502 [2024-11-06 13:48:19.315337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:25.502 [2024-11-06 13:48:19.315352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:22:25.502 [2024-11-06 13:48:19.315363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.503 [2024-11-06 13:48:19.315496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.503 [2024-11-06 13:48:19.315511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:25.503 [2024-11-06 13:48:19.315524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:25.503 [2024-11-06 13:48:19.315535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.503 [2024-11-06 13:48:19.315682] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:25.503 [2024-11-06 13:48:19.315694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:25.503 [2024-11-06 13:48:19.315711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.503 [2024-11-06 13:48:19.315721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.503 [2024-11-06 13:48:19.315734] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:22:25.503 [2024-11-06 13:48:19.315745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:25.503 [2024-11-06 13:48:19.315756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:25.503 [2024-11-06 13:48:19.315766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:25.503 [2024-11-06 13:48:19.315778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:25.503 [2024-11-06 13:48:19.315788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.503 [2024-11-06 13:48:19.315800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:25.503 [2024-11-06 13:48:19.315810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:25.503 [2024-11-06 13:48:19.315821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.503 [2024-11-06 13:48:19.315831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:25.503 [2024-11-06 13:48:19.315843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:25.503 [2024-11-06 13:48:19.315855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.503 [2024-11-06 13:48:19.315871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:25.503 [2024-11-06 13:48:19.315881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:25.503 [2024-11-06 13:48:19.315892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.503 [2024-11-06 13:48:19.315902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:25.503 [2024-11-06 13:48:19.315914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:25.503 [2024-11-06 13:48:19.315923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.503 [2024-11-06 13:48:19.315935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:25.503 [2024-11-06 13:48:19.315944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:25.503 [2024-11-06 13:48:19.315956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.503 [2024-11-06 13:48:19.315965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:25.503 [2024-11-06 13:48:19.315977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:25.503 [2024-11-06 13:48:19.315987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.503 [2024-11-06 13:48:19.315998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:25.503 [2024-11-06 13:48:19.316007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:25.503 [2024-11-06 13:48:19.316030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.503 [2024-11-06 13:48:19.316040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:25.503 [2024-11-06 13:48:19.316055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:25.503 [2024-11-06 13:48:19.316064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.503 [2024-11-06 13:48:19.316075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:25.503 [2024-11-06 13:48:19.316101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:25.503 [2024-11-06 13:48:19.316113] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.503 [2024-11-06 13:48:19.316122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:25.503 [2024-11-06 13:48:19.316134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:25.503 [2024-11-06 13:48:19.316143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.503 [2024-11-06 13:48:19.316157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:25.503 [2024-11-06 13:48:19.316168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:25.503 [2024-11-06 13:48:19.316186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.503 [2024-11-06 13:48:19.316195] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:25.503 [2024-11-06 13:48:19.316211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:25.503 [2024-11-06 13:48:19.316221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.503 [2024-11-06 13:48:19.316233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.503 [2024-11-06 13:48:19.316245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:25.503 [2024-11-06 13:48:19.316260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:25.503 [2024-11-06 13:48:19.316269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:25.503 [2024-11-06 13:48:19.316281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:25.503 [2024-11-06 13:48:19.316290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:25.503 [2024-11-06 13:48:19.316302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:25.503 [2024-11-06 13:48:19.316315] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:25.503 [2024-11-06 13:48:19.316334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.503 [2024-11-06 13:48:19.316346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:25.503 [2024-11-06 13:48:19.316359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:25.503 [2024-11-06 13:48:19.316370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:25.503 [2024-11-06 13:48:19.316383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:25.503 [2024-11-06 13:48:19.316393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:25.503 [2024-11-06 13:48:19.316406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:25.503 [2024-11-06 13:48:19.316416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:25.503 [2024-11-06 13:48:19.316429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:22:25.503 [2024-11-06 13:48:19.316439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:25.503 [2024-11-06 13:48:19.316456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:25.503 [2024-11-06 13:48:19.316466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:25.503 [2024-11-06 13:48:19.316479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:25.503 [2024-11-06 13:48:19.316489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:25.503 [2024-11-06 13:48:19.316502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:25.503 [2024-11-06 13:48:19.316512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:25.503 [2024-11-06 13:48:19.316525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.503 [2024-11-06 13:48:19.316539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:25.503 [2024-11-06 13:48:19.316551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:25.503 [2024-11-06 13:48:19.316562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:25.503 [2024-11-06 13:48:19.316575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:25.503 [2024-11-06 13:48:19.316586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.503 [2024-11-06 13:48:19.316599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:25.503 [2024-11-06 13:48:19.316609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:22:25.503 [2024-11-06 13:48:19.316621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.503 [2024-11-06 13:48:19.316727] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
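A quick consistency check on the layout dump above: it reports 20971520 L2P entries at a 4-byte L2P address size, which multiplies out to exactly the 80.00 MiB shown for the l2p region (and, at one entry per 4 KiB logical block, also fixes the FTL bdev at 20971520 blocks, as the later bdev_get_bdevs dump of ftl0 confirms):

    entries=20971520                       # "L2P entries" from the layout dump
    addr=4                                 # "L2P address size" in bytes
    echo $(( entries * addr / 1048576 ))   # prints 80, matching "Region l2p ... blocks: 80.00 MiB"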
00:22:25.503 [2024-11-06 13:48:19.316744] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:29.718 [2024-11-06 13:48:23.005758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.718 [2024-11-06 13:48:23.005845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:29.718 [2024-11-06 13:48:23.005863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3689.009 ms 00:22:29.718 [2024-11-06 13:48:23.005876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.718 [2024-11-06 13:48:23.043187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.043247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.719 [2024-11-06 13:48:23.043263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.900 ms 00:22:29.719 [2024-11-06 13:48:23.043277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.043445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.043462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:29.719 [2024-11-06 13:48:23.043474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:29.719 [2024-11-06 13:48:23.043490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.105060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.105119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.719 [2024-11-06 13:48:23.105154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.474 ms 00:22:29.719 [2024-11-06 13:48:23.105171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.105230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.105243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.719 [2024-11-06 13:48:23.105255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:29.719 [2024-11-06 13:48:23.105275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.105794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.105819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.719 [2024-11-06 13:48:23.105831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:22:29.719 [2024-11-06 13:48:23.105847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.105985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.106000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.719 [2024-11-06 13:48:23.106012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:29.719 [2024-11-06 13:48:23.106045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.128752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.128805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.719 [2024-11-06 
13:48:23.128820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.664 ms 00:22:29.719 [2024-11-06 13:48:23.128834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.143190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:29.719 [2024-11-06 13:48:23.160802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.160888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:29.719 [2024-11-06 13:48:23.160915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.810 ms 00:22:29.719 [2024-11-06 13:48:23.160928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.236795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.236860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:29.719 [2024-11-06 13:48:23.236880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.784 ms 00:22:29.719 [2024-11-06 13:48:23.236891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.237126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.237141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:29.719 [2024-11-06 13:48:23.237158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:22:29.719 [2024-11-06 13:48:23.237169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.275911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.275964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:29.719 [2024-11-06 13:48:23.275982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.657 ms 00:22:29.719 [2024-11-06 13:48:23.275993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.314608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.314676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:29.719 [2024-11-06 13:48:23.314697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.542 ms 00:22:29.719 [2024-11-06 13:48:23.314708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.315522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.315559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:29.719 [2024-11-06 13:48:23.315574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:22:29.719 [2024-11-06 13:48:23.315584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.434537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.434598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:29.719 [2024-11-06 13:48:23.434625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.845 ms 00:22:29.719 [2024-11-06 13:48:23.434636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 
13:48:23.475319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.475369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:29.719 [2024-11-06 13:48:23.475388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.551 ms 00:22:29.719 [2024-11-06 13:48:23.475399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.514427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.514474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:29.719 [2024-11-06 13:48:23.514493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.951 ms 00:22:29.719 [2024-11-06 13:48:23.514504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.553167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.553221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:29.719 [2024-11-06 13:48:23.553240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.603 ms 00:22:29.719 [2024-11-06 13:48:23.553251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.553316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.553329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:29.719 [2024-11-06 13:48:23.553350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:29.719 [2024-11-06 13:48:23.553360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.553514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.719 [2024-11-06 13:48:23.553527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:29.719 [2024-11-06 13:48:23.553540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:29.719 [2024-11-06 13:48:23.553551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.719 [2024-11-06 13:48:23.554861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4256.886 ms, result 0 00:22:29.719 { 00:22:29.719 "name": "ftl0", 00:22:29.719 "uuid": "7e66917e-2b4c-45e3-9c23-da51cdbdf28a" 00:22:29.719 } 00:22:29.719 13:48:23 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:22:29.719 13:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:22:29.719 13:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:29.720 13:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:22:29.720 13:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:29.720 13:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:29.720 13:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:29.979 13:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:30.239 [ 00:22:30.239 { 00:22:30.239 "name": "ftl0", 00:22:30.239 "aliases": [ 00:22:30.239 "7e66917e-2b4c-45e3-9c23-da51cdbdf28a" 00:22:30.239 ], 00:22:30.239 "product_name": "FTL 
disk", 00:22:30.239 "block_size": 4096, 00:22:30.239 "num_blocks": 20971520, 00:22:30.239 "uuid": "7e66917e-2b4c-45e3-9c23-da51cdbdf28a", 00:22:30.239 "assigned_rate_limits": { 00:22:30.239 "rw_ios_per_sec": 0, 00:22:30.239 "rw_mbytes_per_sec": 0, 00:22:30.239 "r_mbytes_per_sec": 0, 00:22:30.239 "w_mbytes_per_sec": 0 00:22:30.239 }, 00:22:30.239 "claimed": false, 00:22:30.239 "zoned": false, 00:22:30.239 "supported_io_types": { 00:22:30.239 "read": true, 00:22:30.239 "write": true, 00:22:30.239 "unmap": true, 00:22:30.239 "flush": true, 00:22:30.239 "reset": false, 00:22:30.239 "nvme_admin": false, 00:22:30.239 "nvme_io": false, 00:22:30.239 "nvme_io_md": false, 00:22:30.239 "write_zeroes": true, 00:22:30.239 "zcopy": false, 00:22:30.239 "get_zone_info": false, 00:22:30.239 "zone_management": false, 00:22:30.239 "zone_append": false, 00:22:30.239 "compare": false, 00:22:30.239 "compare_and_write": false, 00:22:30.239 "abort": false, 00:22:30.239 "seek_hole": false, 00:22:30.239 "seek_data": false, 00:22:30.239 "copy": false, 00:22:30.239 "nvme_iov_md": false 00:22:30.239 }, 00:22:30.239 "driver_specific": { 00:22:30.239 "ftl": { 00:22:30.239 "base_bdev": "d83ad68b-fb4e-4c3f-8f0d-b347deb4e064", 00:22:30.239 "cache": "nvc0n1p0" 00:22:30.239 } 00:22:30.239 } 00:22:30.239 } 00:22:30.239 ] 00:22:30.239 13:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:22:30.239 13:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:22:30.239 13:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:30.498 13:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:22:30.498 13:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:30.757 [2024-11-06 13:48:24.600414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.600475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:30.757 [2024-11-06 13:48:24.600492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:30.757 [2024-11-06 13:48:24.600508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.757 [2024-11-06 13:48:24.600555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:30.757 [2024-11-06 13:48:24.605393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.605433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:30.757 [2024-11-06 13:48:24.605451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.811 ms 00:22:30.757 [2024-11-06 13:48:24.605463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.757 [2024-11-06 13:48:24.606058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.606083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:30.757 [2024-11-06 13:48:24.606099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:22:30.757 [2024-11-06 13:48:24.606110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.757 [2024-11-06 13:48:24.609147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.609178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:30.757 
[2024-11-06 13:48:24.609200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:22:30.757 [2024-11-06 13:48:24.609215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.757 [2024-11-06 13:48:24.615181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.615219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:30.757 [2024-11-06 13:48:24.615237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.889 ms 00:22:30.757 [2024-11-06 13:48:24.615266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.757 [2024-11-06 13:48:24.658085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.658159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:30.757 [2024-11-06 13:48:24.658180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.692 ms 00:22:30.757 [2024-11-06 13:48:24.658191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.757 [2024-11-06 13:48:24.684402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.684480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:30.757 [2024-11-06 13:48:24.684504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:22:30.757 [2024-11-06 13:48:24.684515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.757 [2024-11-06 13:48:24.684879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.684898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:30.757 [2024-11-06 13:48:24.684913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:22:30.757 [2024-11-06 13:48:24.684937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.757 [2024-11-06 13:48:24.729429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.757 [2024-11-06 13:48:24.729523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:30.757 [2024-11-06 13:48:24.729545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.443 ms 00:22:30.757 [2024-11-06 13:48:24.729557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.018 [2024-11-06 13:48:24.773398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.018 [2024-11-06 13:48:24.773483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:31.018 [2024-11-06 13:48:24.773504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.722 ms 00:22:31.018 [2024-11-06 13:48:24.773515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.018 [2024-11-06 13:48:24.815516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.018 [2024-11-06 13:48:24.815580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:31.018 [2024-11-06 13:48:24.815600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.887 ms 00:22:31.018 [2024-11-06 13:48:24.815612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.018 [2024-11-06 13:48:24.859005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.018 [2024-11-06 13:48:24.859084] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:31.018 [2024-11-06 13:48:24.859107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.186 ms 00:22:31.018 [2024-11-06 13:48:24.859120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.018 [2024-11-06 13:48:24.859214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:31.018 [2024-11-06 13:48:24.859236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:31.018 [2024-11-06 13:48:24.859526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 
[2024-11-06 13:48:24.859538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:22:31.019 [2024-11-06 13:48:24.859878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.859988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:31.019 [2024-11-06 13:48:24.860559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:31.020 [2024-11-06 13:48:24.860577] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:31.020 [2024-11-06 13:48:24.860590] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7e66917e-2b4c-45e3-9c23-da51cdbdf28a 00:22:31.020 [2024-11-06 13:48:24.860601] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:31.020 [2024-11-06 13:48:24.860616] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:31.020 [2024-11-06 13:48:24.860629] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:31.020 [2024-11-06 13:48:24.860642] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:31.020 [2024-11-06 13:48:24.860652] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:31.020 [2024-11-06 13:48:24.860665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:31.020 [2024-11-06 13:48:24.860674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:31.020 [2024-11-06 13:48:24.860687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:31.020 [2024-11-06 13:48:24.860696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:31.020 [2024-11-06 13:48:24.860709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.020 [2024-11-06 13:48:24.860720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:31.020 [2024-11-06 13:48:24.860735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:22:31.020 [2024-11-06 13:48:24.860745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.020 [2024-11-06 13:48:24.882446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.020 [2024-11-06 13:48:24.882512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:31.020 [2024-11-06 13:48:24.882531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.594 ms 00:22:31.020 [2024-11-06 13:48:24.882542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.020 [2024-11-06 13:48:24.883123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.020 [2024-11-06 13:48:24.883141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:31.020 [2024-11-06 13:48:24.883155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:22:31.020 [2024-11-06 13:48:24.883166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.020 [2024-11-06 13:48:24.956250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.020 [2024-11-06 13:48:24.956313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:31.020 [2024-11-06 13:48:24.956332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.020 [2024-11-06 13:48:24.956343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
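The statistics dumped during shutdown describe a device that was created and torn down without any user I/O: all 100 bands are still free, user writes is 0, and the 960 total writes are metadata traffic from startup and the clean shutdown. Taking WAF in its usual sense of total media writes over user writes, the reported inf is just the zero denominator:

    awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'   # prints inf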
00:22:31.020 [2024-11-06 13:48:24.956448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.020 [2024-11-06 13:48:24.956460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.020 [2024-11-06 13:48:24.956474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.020 [2024-11-06 13:48:24.956484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.020 [2024-11-06 13:48:24.956653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.020 [2024-11-06 13:48:24.956668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.020 [2024-11-06 13:48:24.956681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.020 [2024-11-06 13:48:24.956691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.020 [2024-11-06 13:48:24.956736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.020 [2024-11-06 13:48:24.956746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.020 [2024-11-06 13:48:24.956759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.020 [2024-11-06 13:48:24.956769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.095521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.280 [2024-11-06 13:48:25.095577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.280 [2024-11-06 13:48:25.095594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.280 [2024-11-06 13:48:25.095605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.200526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.280 [2024-11-06 13:48:25.200587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.280 [2024-11-06 13:48:25.200606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.280 [2024-11-06 13:48:25.200633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.200770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.280 [2024-11-06 13:48:25.200783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:31.280 [2024-11-06 13:48:25.200800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.280 [2024-11-06 13:48:25.200810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.200924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.280 [2024-11-06 13:48:25.200936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:31.280 [2024-11-06 13:48:25.200949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.280 [2024-11-06 13:48:25.200959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.201120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.280 [2024-11-06 13:48:25.201137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:31.280 [2024-11-06 13:48:25.201154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.280 [2024-11-06 
13:48:25.201164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.201235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.280 [2024-11-06 13:48:25.201252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:31.280 [2024-11-06 13:48:25.201266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.280 [2024-11-06 13:48:25.201276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.201334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.280 [2024-11-06 13:48:25.201345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:31.280 [2024-11-06 13:48:25.201359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.280 [2024-11-06 13:48:25.201371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.201441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.280 [2024-11-06 13:48:25.201456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:31.280 [2024-11-06 13:48:25.201469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.280 [2024-11-06 13:48:25.201485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.280 [2024-11-06 13:48:25.201690] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 601.247 ms, result 0 00:22:31.280 true 00:22:31.280 13:48:25 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74635 00:22:31.280 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 74635 ']' 00:22:31.280 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 74635 00:22:31.280 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:22:31.280 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:31.280 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74635 00:22:31.540 killing process with pid 74635 00:22:31.540 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:31.540 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:31.540 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74635' 00:22:31.540 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 74635 00:22:31.540 13:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 74635 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:36.944 13:48:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:36.944 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:22:36.944 fio-3.35 00:22:36.944 Starting 1 thread 00:22:42.221 00:22:42.221 test: (groupid=0, jobs=1): err= 0: pid=74870: Wed Nov 6 13:48:35 2024 00:22:42.221 read: IOPS=974, BW=64.7MiB/s (67.8MB/s)(255MiB/3934msec) 00:22:42.221 slat (nsec): min=7396, max=42796, avg=10779.59, stdev=3335.70 00:22:42.221 clat (usec): min=330, max=741, avg=448.76, stdev=57.81 00:22:42.221 lat (usec): min=342, max=757, avg=459.54, stdev=58.50 00:22:42.221 clat percentiles (usec): 00:22:42.221 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 392], 00:22:42.221 | 30.00th=[ 429], 40.00th=[ 437], 50.00th=[ 445], 60.00th=[ 453], 00:22:42.221 | 70.00th=[ 474], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 537], 00:22:42.221 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 660], 99.95th=[ 725], 00:22:42.221 | 99.99th=[ 742] 00:22:42.221 write: IOPS=981, BW=65.2MiB/s (68.3MB/s)(256MiB/3929msec); 0 zone resets 00:22:42.221 slat (nsec): min=18366, max=81515, avg=29315.20, stdev=5476.95 00:22:42.221 clat (usec): min=360, max=1640, avg=520.89, stdev=70.85 00:22:42.221 lat (usec): min=388, max=1668, avg=550.21, stdev=71.43 00:22:42.221 clat percentiles (usec): 00:22:42.221 | 1.00th=[ 392], 5.00th=[ 437], 10.00th=[ 449], 20.00th=[ 465], 00:22:42.221 | 30.00th=[ 478], 40.00th=[ 502], 50.00th=[ 523], 60.00th=[ 529], 00:22:42.221 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 619], 00:22:42.221 | 99.00th=[ 799], 99.50th=[ 840], 99.90th=[ 922], 99.95th=[ 1012], 00:22:42.221 | 99.99th=[ 1647] 00:22:42.221 bw ( KiB/s): min=63113, max=69904, per=99.71%, avg=66544.14, stdev=2473.11, samples=7 00:22:42.221 iops : min= 928, max= 1028, avg=978.57, stdev=36.40, samples=7 00:22:42.221 lat (usec) : 500=58.68%, 750=40.66%, 1000=0.64% 00:22:42.221 lat 
(msec) : 2=0.03% 00:22:42.221 cpu : usr=99.06%, sys=0.18%, ctx=6, majf=0, minf=1169 00:22:42.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:42.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.221 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:42.221 00:22:42.221 Run status group 0 (all jobs): 00:22:42.221 READ: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=255MiB (267MB), run=3934-3934msec 00:22:42.221 WRITE: bw=65.2MiB/s (68.3MB/s), 65.2MiB/s-65.2MiB/s (68.3MB/s-68.3MB/s), io=256MiB (269MB), run=3929-3929msec 00:22:44.128 ----------------------------------------------------- 00:22:44.128 Suppressions used: 00:22:44.128 count bytes template 00:22:44.128 1 5 /usr/src/fio/parse.c 00:22:44.128 1 8 libtcmalloc_minimal.so 00:22:44.128 1 904 libcrypto.so 00:22:44.128 ----------------------------------------------------- 00:22:44.128 00:22:44.128 13:48:38 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:44.128 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.129 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:44.388 13:48:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:44.781 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:44.781 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:44.781 fio-3.35 00:22:44.781 Starting 2 threads 00:23:16.859 00:23:16.859 first_half: (groupid=0, jobs=1): err= 0: pid=74979: Wed Nov 6 13:49:05 2024 00:23:16.859 read: IOPS=2576, BW=10.1MiB/s (10.6MB/s)(255MiB/25351msec) 00:23:16.859 slat (usec): min=3, max=104, avg= 7.63, stdev= 3.61 00:23:16.859 clat (usec): min=939, max=332403, avg=39413.36, stdev=21467.06 00:23:16.859 lat (usec): min=955, max=332408, avg=39421.00, stdev=21467.07 00:23:16.859 clat percentiles (msec): 00:23:16.859 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:23:16.859 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:23:16.859 | 70.00th=[ 35], 80.00th=[ 41], 90.00th=[ 42], 95.00th=[ 55], 00:23:16.859 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 203], 99.95th=[ 279], 00:23:16.859 | 99.99th=[ 321] 00:23:16.859 write: IOPS=2930, BW=11.4MiB/s (12.0MB/s)(256MiB/22362msec); 0 zone resets 00:23:16.859 slat (usec): min=4, max=380, avg= 9.61, stdev= 6.37 00:23:16.859 clat (usec): min=433, max=110835, avg=10200.69, stdev=17338.14 00:23:16.859 lat (usec): min=439, max=110851, avg=10210.30, stdev=17338.61 00:23:16.859 clat percentiles (usec): 00:23:16.859 | 1.00th=[ 1012], 5.00th=[ 1336], 10.00th=[ 1565], 20.00th=[ 2040], 00:23:16.859 | 30.00th=[ 3326], 40.00th=[ 4817], 50.00th=[ 5932], 60.00th=[ 6915], 00:23:16.859 | 70.00th=[ 8160], 80.00th=[ 11600], 90.00th=[ 14746], 95.00th=[ 34341], 00:23:16.859 | 99.00th=[ 94897], 99.50th=[ 98042], 99.90th=[104334], 99.95th=[107480], 00:23:16.859 | 99.99th=[109577] 00:23:16.859 bw ( KiB/s): min= 952, max=38536, per=92.63%, avg=20971.52, stdev=12346.15, samples=25 00:23:16.859 iops : min= 238, max= 9634, avg=5242.88, stdev=3086.54, samples=25 00:23:16.859 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.37% 00:23:16.859 lat (msec) : 2=9.27%, 4=7.88%, 10=20.87%, 20=9.11%, 50=47.18% 00:23:16.859 lat (msec) : 100=3.48%, 250=1.69%, 500=0.04% 00:23:16.859 cpu : usr=99.06%, sys=0.28%, ctx=111, majf=0, minf=5605 00:23:16.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:16.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.859 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:16.859 issued rwts: total=65315,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:16.859 second_half: (groupid=0, jobs=1): err= 0: pid=74980: Wed Nov 6 13:49:05 2024 00:23:16.859 read: IOPS=2561, BW=10.0MiB/s (10.5MB/s)(255MiB/25527msec) 00:23:16.859 slat (nsec): min=3790, max=98279, avg=6751.08, stdev=2292.20 00:23:16.859 clat (usec): min=993, max=339899, avg=38830.36, stdev=24124.87 00:23:16.859 lat (usec): min=1003, max=339907, avg=38837.11, stdev=24125.17 00:23:16.859 clat percentiles (msec): 00:23:16.859 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 34], 00:23:16.859 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:23:16.859 | 70.00th=[ 35], 80.00th=[ 40], 90.00th=[ 42], 95.00th=[ 50], 00:23:16.859 | 
99.00th=[ 178], 99.50th=[ 199], 99.90th=[ 264], 99.95th=[ 279], 00:23:16.859 | 99.99th=[ 334] 00:23:16.859 write: IOPS=2829, BW=11.1MiB/s (11.6MB/s)(256MiB/23158msec); 0 zone resets 00:23:16.859 slat (usec): min=4, max=391, avg= 8.81, stdev= 5.08 00:23:16.859 clat (usec): min=451, max=111414, avg=11087.53, stdev=18280.47 00:23:16.859 lat (usec): min=458, max=111425, avg=11096.35, stdev=18280.92 00:23:16.859 clat percentiles (usec): 00:23:16.859 | 1.00th=[ 914], 5.00th=[ 1254], 10.00th=[ 1483], 20.00th=[ 2278], 00:23:16.859 | 30.00th=[ 3916], 40.00th=[ 5080], 50.00th=[ 5604], 60.00th=[ 6652], 00:23:16.859 | 70.00th=[ 7767], 80.00th=[ 11863], 90.00th=[ 19792], 95.00th=[ 44303], 00:23:16.859 | 99.00th=[ 95945], 99.50th=[100140], 99.90th=[106431], 99.95th=[108528], 00:23:16.859 | 99.99th=[111674] 00:23:16.859 bw ( KiB/s): min= 392, max=57304, per=89.07%, avg=20164.92, stdev=15520.10, samples=26 00:23:16.859 iops : min= 98, max=14326, avg=5041.23, stdev=3880.03, samples=26 00:23:16.859 lat (usec) : 500=0.01%, 750=0.10%, 1000=0.71% 00:23:16.859 lat (msec) : 2=8.31%, 4=6.51%, 10=24.19%, 20=7.22%, 50=48.04% 00:23:16.859 lat (msec) : 100=3.19%, 250=1.66%, 500=0.07% 00:23:16.859 cpu : usr=99.08%, sys=0.25%, ctx=34, majf=0, minf=5502 00:23:16.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:16.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:16.859 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:16.859 issued rwts: total=65395,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:16.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:16.859 00:23:16.859 Run status group 0 (all jobs): 00:23:16.859 READ: bw=20.0MiB/s (21.0MB/s), 10.0MiB/s-10.1MiB/s (10.5MB/s-10.6MB/s), io=511MiB (535MB), run=25351-25527msec 00:23:16.859 WRITE: bw=22.1MiB/s (23.2MB/s), 11.1MiB/s-11.4MiB/s (11.6MB/s-12.0MB/s), io=512MiB (537MB), run=22362-23158msec 00:23:16.859 ----------------------------------------------------- 00:23:16.859 Suppressions used: 00:23:16.859 count bytes template 00:23:16.859 2 10 /usr/src/fio/parse.c 00:23:16.859 2 192 /usr/src/fio/iolog.c 00:23:16.859 1 8 libtcmalloc_minimal.so 00:23:16.859 1 904 libcrypto.so 00:23:16.859 ----------------------------------------------------- 00:23:16.859 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:16.859 13:49:07 
ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:23:16.859 13:49:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:16.859 13:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:16.859 13:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:16.859 13:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:23:16.859 13:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:16.859 13:49:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:16.859 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:16.859 fio-3.35 00:23:16.859 Starting 1 thread 00:23:31.735 00:23:31.735 test: (groupid=0, jobs=1): err= 0: pid=75315: Wed Nov 6 13:49:23 2024 00:23:31.735 read: IOPS=7349, BW=28.7MiB/s (30.1MB/s)(255MiB/8872msec) 00:23:31.735 slat (nsec): min=3669, max=37657, avg=5838.18, stdev=1884.00 00:23:31.735 clat (usec): min=747, max=34155, avg=17406.29, stdev=926.82 00:23:31.735 lat (usec): min=751, max=34162, avg=17412.13, stdev=926.80 00:23:31.735 clat percentiles (usec): 00:23:31.735 | 1.00th=[16450], 5.00th=[16581], 10.00th=[16712], 20.00th=[16909], 00:23:31.735 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17171], 60.00th=[17433], 00:23:31.735 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18220], 00:23:31.735 | 99.00th=[20317], 99.50th=[21365], 99.90th=[25560], 99.95th=[29754], 00:23:31.735 | 99.99th=[33424] 00:23:31.735 write: IOPS=13.4k, BW=52.5MiB/s (55.1MB/s)(256MiB/4873msec); 0 zone resets 00:23:31.735 slat (usec): min=4, max=548, avg= 8.48, stdev= 5.50 00:23:31.735 clat (usec): min=582, max=54345, avg=9466.60, stdev=11338.92 00:23:31.735 lat (usec): min=590, max=54352, avg=9475.09, stdev=11338.91 00:23:31.735 clat percentiles (usec): 00:23:31.735 | 1.00th=[ 848], 5.00th=[ 996], 10.00th=[ 1090], 20.00th=[ 1237], 00:23:31.735 | 30.00th=[ 1401], 40.00th=[ 1745], 50.00th=[ 6652], 60.00th=[ 7767], 00:23:31.735 | 70.00th=[ 8848], 80.00th=[10552], 90.00th=[33817], 95.00th=[35390], 00:23:31.735 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40109], 99.95th=[45351], 00:23:31.735 | 99.99th=[52691] 00:23:31.735 bw ( KiB/s): min=35208, max=64640, per=97.46%, avg=52428.80, stdev=8719.79, samples=10 00:23:31.735 iops : min= 8802, max=16160, avg=13107.20, stdev=2179.95, samples=10 00:23:31.735 lat (usec) : 750=0.12%, 1000=2.56% 00:23:31.735 lat (msec) : 2=17.92%, 4=0.49%, 10=17.48%, 20=52.23%, 50=9.20% 00:23:31.735 lat (msec) : 100=0.01% 00:23:31.735 cpu : usr=98.87%, sys=0.36%, ctx=21, majf=0, minf=5565 
00:23:31.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:31.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.735 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:31.735 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:31.735 00:23:31.735 Run status group 0 (all jobs): 00:23:31.735 READ: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=255MiB (267MB), run=8872-8872msec 00:23:31.735 WRITE: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=256MiB (268MB), run=4873-4873msec 00:23:31.735 ----------------------------------------------------- 00:23:31.735 Suppressions used: 00:23:31.735 count bytes template 00:23:31.735 1 5 /usr/src/fio/parse.c 00:23:31.735 2 192 /usr/src/fio/iolog.c 00:23:31.735 1 8 libtcmalloc_minimal.so 00:23:31.735 1 904 libcrypto.so 00:23:31.735 ----------------------------------------------------- 00:23:31.735 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:31.735 Remove shared memory files 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57922 /dev/shm/spdk_tgt_trace.pid73528 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:23:31.735 ************************************ 00:23:31.735 END TEST ftl_fio_basic 00:23:31.735 ************************************ 00:23:31.735 00:23:31.735 real 1m11.329s 00:23:31.735 user 2m35.737s 00:23:31.735 sys 0m4.267s 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:31.735 13:49:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:31.735 13:49:25 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:31.735 13:49:25 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:31.735 13:49:25 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:31.735 13:49:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:31.735 ************************************ 00:23:31.735 START TEST ftl_bdevperf 00:23:31.735 ************************************ 00:23:31.735 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:31.735 * Looking for test storage... 
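Each fio_bdev invocation above runs fio through the SPDK bdev ioengine plugin, first resolving the ASan runtime the plugin was linked against and preloading it ahead of the plugin, since a sanitizer runtime typically refuses to start unless it comes first in the LD_PRELOAD chain. A minimal standalone sketch of that pattern, with paths taken from the trace above (treat them as assumptions outside this VM image):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Resolve the libasan the plugin links against (empty when built without ASan).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Sanitizer runtime first, then the ioengine plugin.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio path/to/job.fio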
00:23:31.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:31.735 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:31.735 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:31.735 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:31.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.995 --rc genhtml_branch_coverage=1 00:23:31.995 --rc genhtml_function_coverage=1 00:23:31.995 --rc genhtml_legend=1 00:23:31.995 --rc geninfo_all_blocks=1 00:23:31.995 --rc geninfo_unexecuted_blocks=1 00:23:31.995 00:23:31.995 ' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:31.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.995 --rc genhtml_branch_coverage=1 00:23:31.995 
--rc genhtml_function_coverage=1 00:23:31.995 --rc genhtml_legend=1 00:23:31.995 --rc geninfo_all_blocks=1 00:23:31.995 --rc geninfo_unexecuted_blocks=1 00:23:31.995 00:23:31.995 ' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:31.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.995 --rc genhtml_branch_coverage=1 00:23:31.995 --rc genhtml_function_coverage=1 00:23:31.995 --rc genhtml_legend=1 00:23:31.995 --rc geninfo_all_blocks=1 00:23:31.995 --rc geninfo_unexecuted_blocks=1 00:23:31.995 00:23:31.995 ' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:31.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.995 --rc genhtml_branch_coverage=1 00:23:31.995 --rc genhtml_function_coverage=1 00:23:31.995 --rc genhtml_legend=1 00:23:31.995 --rc geninfo_all_blocks=1 00:23:31.995 --rc geninfo_unexecuted_blocks=1 00:23:31.995 00:23:31.995 ' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75554 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75554 00:23:31.995 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 75554 ']' 00:23:31.996 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.996 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:31.996 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.996 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:31.996 13:49:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:31.996 [2024-11-06 13:49:25.918434] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
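bdevperf is launched here with -z, so it comes up idle and waits for an RPC client before running any workload against the ftl0 target, and waitforlisten then blocks until the application's RPC socket answers. A rough sketch of that handshake, assuming the default /var/tmp/spdk.sock socket; the suite's waitforlisten helper does more bookkeeping than this polling loop:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # Poll until the RPC server responds; rpc_get_methods is a core SPDK RPC.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
    done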
00:23:31.996 [2024-11-06 13:49:25.918851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75554 ] 00:23:32.255 [2024-11-06 13:49:26.111547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.255 [2024-11-06 13:49:26.228948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.191 13:49:26 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:33.191 13:49:26 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:23:33.191 13:49:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:33.191 13:49:26 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:23:33.191 13:49:26 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:33.191 13:49:26 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:23:33.191 13:49:26 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:23:33.191 13:49:26 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:33.449 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:33.449 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:23:33.449 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:33.449 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:33.449 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:33.449 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:23:33.449 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:23:33.449 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:33.708 { 00:23:33.708 "name": "nvme0n1", 00:23:33.708 "aliases": [ 00:23:33.708 "cac3feb1-db6c-4478-82e5-18957e99dc5c" 00:23:33.708 ], 00:23:33.708 "product_name": "NVMe disk", 00:23:33.708 "block_size": 4096, 00:23:33.708 "num_blocks": 1310720, 00:23:33.708 "uuid": "cac3feb1-db6c-4478-82e5-18957e99dc5c", 00:23:33.708 "numa_id": -1, 00:23:33.708 "assigned_rate_limits": { 00:23:33.708 "rw_ios_per_sec": 0, 00:23:33.708 "rw_mbytes_per_sec": 0, 00:23:33.708 "r_mbytes_per_sec": 0, 00:23:33.708 "w_mbytes_per_sec": 0 00:23:33.708 }, 00:23:33.708 "claimed": true, 00:23:33.708 "claim_type": "read_many_write_one", 00:23:33.708 "zoned": false, 00:23:33.708 "supported_io_types": { 00:23:33.708 "read": true, 00:23:33.708 "write": true, 00:23:33.708 "unmap": true, 00:23:33.708 "flush": true, 00:23:33.708 "reset": true, 00:23:33.708 "nvme_admin": true, 00:23:33.708 "nvme_io": true, 00:23:33.708 "nvme_io_md": false, 00:23:33.708 "write_zeroes": true, 00:23:33.708 "zcopy": false, 00:23:33.708 "get_zone_info": false, 00:23:33.708 "zone_management": false, 00:23:33.708 "zone_append": false, 00:23:33.708 "compare": true, 00:23:33.708 "compare_and_write": false, 00:23:33.708 "abort": true, 00:23:33.708 "seek_hole": false, 00:23:33.708 "seek_data": false, 00:23:33.708 "copy": true, 00:23:33.708 "nvme_iov_md": false 00:23:33.708 }, 00:23:33.708 "driver_specific": { 00:23:33.708 
"nvme": [ 00:23:33.708 { 00:23:33.708 "pci_address": "0000:00:11.0", 00:23:33.708 "trid": { 00:23:33.708 "trtype": "PCIe", 00:23:33.708 "traddr": "0000:00:11.0" 00:23:33.708 }, 00:23:33.708 "ctrlr_data": { 00:23:33.708 "cntlid": 0, 00:23:33.708 "vendor_id": "0x1b36", 00:23:33.708 "model_number": "QEMU NVMe Ctrl", 00:23:33.708 "serial_number": "12341", 00:23:33.708 "firmware_revision": "8.0.0", 00:23:33.708 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:33.708 "oacs": { 00:23:33.708 "security": 0, 00:23:33.708 "format": 1, 00:23:33.708 "firmware": 0, 00:23:33.708 "ns_manage": 1 00:23:33.708 }, 00:23:33.708 "multi_ctrlr": false, 00:23:33.708 "ana_reporting": false 00:23:33.708 }, 00:23:33.708 "vs": { 00:23:33.708 "nvme_version": "1.4" 00:23:33.708 }, 00:23:33.708 "ns_data": { 00:23:33.708 "id": 1, 00:23:33.708 "can_share": false 00:23:33.708 } 00:23:33.708 } 00:23:33.708 ], 00:23:33.708 "mp_policy": "active_passive" 00:23:33.708 } 00:23:33.708 } 00:23:33.708 ]' 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:33.708 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:33.967 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d96f729a-06e7-4af9-ac34-811209766011 00:23:33.967 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:23:33.967 13:49:27 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d96f729a-06e7-4af9-ac34-811209766011 00:23:34.225 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f3788ede-fdef-4c56-90f1-ec61c0f50730 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f3788ede-fdef-4c56-90f1-ec61c0f50730 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:34.483 13:49:28 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:23:34.483 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:34.742 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:34.742 { 00:23:34.742 "name": "0dd0f7d3-683f-4128-9757-cf7f0dcb34a1", 00:23:34.742 "aliases": [ 00:23:34.742 "lvs/nvme0n1p0" 00:23:34.742 ], 00:23:34.742 "product_name": "Logical Volume", 00:23:34.742 "block_size": 4096, 00:23:34.742 "num_blocks": 26476544, 00:23:34.742 "uuid": "0dd0f7d3-683f-4128-9757-cf7f0dcb34a1", 00:23:34.742 "assigned_rate_limits": { 00:23:34.742 "rw_ios_per_sec": 0, 00:23:34.742 "rw_mbytes_per_sec": 0, 00:23:34.742 "r_mbytes_per_sec": 0, 00:23:34.742 "w_mbytes_per_sec": 0 00:23:34.742 }, 00:23:34.742 "claimed": false, 00:23:34.742 "zoned": false, 00:23:34.742 "supported_io_types": { 00:23:34.742 "read": true, 00:23:34.742 "write": true, 00:23:34.742 "unmap": true, 00:23:34.742 "flush": false, 00:23:34.742 "reset": true, 00:23:34.742 "nvme_admin": false, 00:23:34.742 "nvme_io": false, 00:23:34.742 "nvme_io_md": false, 00:23:34.742 "write_zeroes": true, 00:23:34.742 "zcopy": false, 00:23:34.742 "get_zone_info": false, 00:23:34.742 "zone_management": false, 00:23:34.742 "zone_append": false, 00:23:34.742 "compare": false, 00:23:34.742 "compare_and_write": false, 00:23:34.742 "abort": false, 00:23:34.742 "seek_hole": true, 00:23:34.742 "seek_data": true, 00:23:34.742 "copy": false, 00:23:34.742 "nvme_iov_md": false 00:23:34.742 }, 00:23:34.742 "driver_specific": { 00:23:34.742 "lvol": { 00:23:34.742 "lvol_store_uuid": "f3788ede-fdef-4c56-90f1-ec61c0f50730", 00:23:34.742 "base_bdev": "nvme0n1", 00:23:34.742 "thin_provision": true, 00:23:34.742 "num_allocated_clusters": 0, 00:23:34.742 "snapshot": false, 00:23:34.742 "clone": false, 00:23:34.742 "esnap_clone": false 00:23:34.742 } 00:23:34.742 } 00:23:34.742 } 00:23:34.742 ]' 00:23:34.742 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:34.742 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:23:34.742 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- 
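The repeated bdev_get_bdevs / jq pattern in these traces is how get_bdev_size turns a bdev's JSON description into a size in MiB: block_size x num_blocks / 1024^2. For the raw namespace that is 4096 x 1310720 = 5120 MiB; for the thin-provisioned lvol it is 4096 x 26476544 = 103424 MiB. A standalone sketch of the same arithmetic (jq filters copied from the trace; the bdev name is whichever device is being measured):

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
    bs=$(echo "$info" | jq '.[] .block_size')   # 4096
    nb=$(echo "$info" | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))           # 5120 (MiB)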
common/autotest_common.sh@1380 -- # local bdev_name=0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:23:35.001 13:49:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:35.599 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:35.599 { 00:23:35.599 "name": "0dd0f7d3-683f-4128-9757-cf7f0dcb34a1", 00:23:35.599 "aliases": [ 00:23:35.599 "lvs/nvme0n1p0" 00:23:35.599 ], 00:23:35.599 "product_name": "Logical Volume", 00:23:35.599 "block_size": 4096, 00:23:35.599 "num_blocks": 26476544, 00:23:35.599 "uuid": "0dd0f7d3-683f-4128-9757-cf7f0dcb34a1", 00:23:35.599 "assigned_rate_limits": { 00:23:35.599 "rw_ios_per_sec": 0, 00:23:35.599 "rw_mbytes_per_sec": 0, 00:23:35.599 "r_mbytes_per_sec": 0, 00:23:35.599 "w_mbytes_per_sec": 0 00:23:35.599 }, 00:23:35.599 "claimed": false, 00:23:35.599 "zoned": false, 00:23:35.599 "supported_io_types": { 00:23:35.599 "read": true, 00:23:35.599 "write": true, 00:23:35.599 "unmap": true, 00:23:35.599 "flush": false, 00:23:35.599 "reset": true, 00:23:35.599 "nvme_admin": false, 00:23:35.599 "nvme_io": false, 00:23:35.599 "nvme_io_md": false, 00:23:35.599 "write_zeroes": true, 00:23:35.599 "zcopy": false, 00:23:35.599 "get_zone_info": false, 00:23:35.599 "zone_management": false, 00:23:35.599 "zone_append": false, 00:23:35.599 "compare": false, 00:23:35.599 "compare_and_write": false, 00:23:35.599 "abort": false, 00:23:35.599 "seek_hole": true, 00:23:35.599 "seek_data": true, 00:23:35.599 "copy": false, 00:23:35.599 "nvme_iov_md": false 00:23:35.599 }, 00:23:35.599 "driver_specific": { 00:23:35.599 "lvol": { 00:23:35.599 "lvol_store_uuid": "f3788ede-fdef-4c56-90f1-ec61c0f50730", 00:23:35.600 "base_bdev": "nvme0n1", 00:23:35.600 "thin_provision": true, 00:23:35.600 "num_allocated_clusters": 0, 00:23:35.600 "snapshot": false, 00:23:35.600 "clone": false, 00:23:35.600 "esnap_clone": false 00:23:35.600 } 00:23:35.600 } 00:23:35.600 } 00:23:35.600 ]' 00:23:35.600 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:35.600 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:23:35.600 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:35.600 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:35.600 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:35.600 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:23:35.600 13:49:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:23:35.600 13:49:29 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:35.859 { 00:23:35.859 "name": "0dd0f7d3-683f-4128-9757-cf7f0dcb34a1", 00:23:35.859 "aliases": [ 00:23:35.859 "lvs/nvme0n1p0" 00:23:35.859 ], 00:23:35.859 "product_name": "Logical Volume", 00:23:35.859 "block_size": 4096, 00:23:35.859 "num_blocks": 26476544, 00:23:35.859 "uuid": "0dd0f7d3-683f-4128-9757-cf7f0dcb34a1", 00:23:35.859 "assigned_rate_limits": { 00:23:35.859 "rw_ios_per_sec": 0, 00:23:35.859 "rw_mbytes_per_sec": 0, 00:23:35.859 "r_mbytes_per_sec": 0, 00:23:35.859 "w_mbytes_per_sec": 0 00:23:35.859 }, 00:23:35.859 "claimed": false, 00:23:35.859 "zoned": false, 00:23:35.859 "supported_io_types": { 00:23:35.859 "read": true, 00:23:35.859 "write": true, 00:23:35.859 "unmap": true, 00:23:35.859 "flush": false, 00:23:35.859 "reset": true, 00:23:35.859 "nvme_admin": false, 00:23:35.859 "nvme_io": false, 00:23:35.859 "nvme_io_md": false, 00:23:35.859 "write_zeroes": true, 00:23:35.859 "zcopy": false, 00:23:35.859 "get_zone_info": false, 00:23:35.859 "zone_management": false, 00:23:35.859 "zone_append": false, 00:23:35.859 "compare": false, 00:23:35.859 "compare_and_write": false, 00:23:35.859 "abort": false, 00:23:35.859 "seek_hole": true, 00:23:35.859 "seek_data": true, 00:23:35.859 "copy": false, 00:23:35.859 "nvme_iov_md": false 00:23:35.859 }, 00:23:35.859 "driver_specific": { 00:23:35.859 "lvol": { 00:23:35.859 "lvol_store_uuid": "f3788ede-fdef-4c56-90f1-ec61c0f50730", 00:23:35.859 "base_bdev": "nvme0n1", 00:23:35.859 "thin_provision": true, 00:23:35.859 "num_allocated_clusters": 0, 00:23:35.859 "snapshot": false, 00:23:35.859 "clone": false, 00:23:35.859 "esnap_clone": false 00:23:35.859 } 00:23:35.859 } 00:23:35.859 } 00:23:35.859 ]' 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:23:35.859 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:36.119 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:36.119 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:36.119 13:49:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:23:36.119 13:49:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:23:36.119 13:49:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0dd0f7d3-683f-4128-9757-cf7f0dcb34a1 -c nvc0n1p0 --l2p_dram_limit 20 00:23:36.119 [2024-11-06 13:49:30.034901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.034989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.119 [2024-11-06 13:49:30.035010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:36.119 [2024-11-06 13:49:30.035042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.035134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.035156] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.119 [2024-11-06 13:49:30.035169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:36.119 [2024-11-06 13:49:30.035184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.035215] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.119 [2024-11-06 13:49:30.036368] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.119 [2024-11-06 13:49:30.036400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.036416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.119 [2024-11-06 13:49:30.036428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:23:36.119 [2024-11-06 13:49:30.036442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.036532] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5dddf63f-e220-4d06-8bbd-f04873eff1de 00:23:36.119 [2024-11-06 13:49:30.039135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.039359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:36.119 [2024-11-06 13:49:30.039391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:36.119 [2024-11-06 13:49:30.039409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.054298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.054540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.119 [2024-11-06 13:49:30.054573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.796 ms 00:23:36.119 [2024-11-06 13:49:30.054587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.054724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.054739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.119 [2024-11-06 13:49:30.054762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:36.119 [2024-11-06 13:49:30.054774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.054847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.054860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.119 [2024-11-06 13:49:30.054877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:36.119 [2024-11-06 13:49:30.054889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.054921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.119 [2024-11-06 13:49:30.061712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.061861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.119 [2024-11-06 13:49:30.061882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.805 ms 00:23:36.119 [2024-11-06 13:49:30.061901] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.061940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.061956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.119 [2024-11-06 13:49:30.061968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:36.119 [2024-11-06 13:49:30.061982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.062016] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:36.119 [2024-11-06 13:49:30.062185] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.119 [2024-11-06 13:49:30.062201] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.119 [2024-11-06 13:49:30.062221] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.119 [2024-11-06 13:49:30.062235] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.119 [2024-11-06 13:49:30.062251] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.119 [2024-11-06 13:49:30.062263] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:36.119 [2024-11-06 13:49:30.062278] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.119 [2024-11-06 13:49:30.062288] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.119 [2024-11-06 13:49:30.062303] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.119 [2024-11-06 13:49:30.062314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.062341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.119 [2024-11-06 13:49:30.062369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:23:36.119 [2024-11-06 13:49:30.062386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.062468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.119 [2024-11-06 13:49:30.062486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.119 [2024-11-06 13:49:30.062498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:36.119 [2024-11-06 13:49:30.062517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.119 [2024-11-06 13:49:30.062609] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.119 [2024-11-06 13:49:30.062626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.119 [2024-11-06 13:49:30.062642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.119 [2024-11-06 13:49:30.062657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.119 [2024-11-06 13:49:30.062669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.119 [2024-11-06 13:49:30.062683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.119 [2024-11-06 13:49:30.062694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:36.119 
[2024-11-06 13:49:30.062708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.119 [2024-11-06 13:49:30.062719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:36.119 [2024-11-06 13:49:30.062734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.119 [2024-11-06 13:49:30.062748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.119 [2024-11-06 13:49:30.062764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:36.119 [2024-11-06 13:49:30.062775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.119 [2024-11-06 13:49:30.062808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.119 [2024-11-06 13:49:30.062819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:36.119 [2024-11-06 13:49:30.062838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.119 [2024-11-06 13:49:30.062849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.119 [2024-11-06 13:49:30.062865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:36.119 [2024-11-06 13:49:30.062876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.119 [2024-11-06 13:49:30.062890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.119 [2024-11-06 13:49:30.062901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:36.119 [2024-11-06 13:49:30.062915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.119 [2024-11-06 13:49:30.062925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.119 [2024-11-06 13:49:30.062939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:36.119 [2024-11-06 13:49:30.062949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.119 [2024-11-06 13:49:30.062963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.119 [2024-11-06 13:49:30.062974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:36.119 [2024-11-06 13:49:30.062987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.119 [2024-11-06 13:49:30.062997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.119 [2024-11-06 13:49:30.063011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:36.119 [2024-11-06 13:49:30.063036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.119 [2024-11-06 13:49:30.063054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.119 [2024-11-06 13:49:30.063064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:36.119 [2024-11-06 13:49:30.063079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.119 [2024-11-06 13:49:30.063089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.119 [2024-11-06 13:49:30.063103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:36.119 [2024-11-06 13:49:30.063113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.119 [2024-11-06 13:49:30.063127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.119 [2024-11-06 13:49:30.063137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:23:36.119 [2024-11-06 13:49:30.063151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.120 [2024-11-06 13:49:30.063162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.120 [2024-11-06 13:49:30.063176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:36.120 [2024-11-06 13:49:30.063187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.120 [2024-11-06 13:49:30.063201] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.120 [2024-11-06 13:49:30.063213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.120 [2024-11-06 13:49:30.063229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.120 [2024-11-06 13:49:30.063241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.120 [2024-11-06 13:49:30.063259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.120 [2024-11-06 13:49:30.063270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.120 [2024-11-06 13:49:30.063284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.120 [2024-11-06 13:49:30.063294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.120 [2024-11-06 13:49:30.063308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.120 [2024-11-06 13:49:30.063319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.120 [2024-11-06 13:49:30.063341] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.120 [2024-11-06 13:49:30.063355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.120 [2024-11-06 13:49:30.063373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:36.120 [2024-11-06 13:49:30.063385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:36.120 [2024-11-06 13:49:30.063401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:36.120 [2024-11-06 13:49:30.063413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:36.120 [2024-11-06 13:49:30.063428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:36.120 [2024-11-06 13:49:30.063449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:36.120 [2024-11-06 13:49:30.063463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:36.120 [2024-11-06 13:49:30.063474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:36.120 [2024-11-06 13:49:30.063491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:36.120 [2024-11-06 13:49:30.063502] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:36.120 [2024-11-06 13:49:30.063515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:36.120 [2024-11-06 13:49:30.063526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:36.120 [2024-11-06 13:49:30.063539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:36.120 [2024-11-06 13:49:30.063550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:36.120 [2024-11-06 13:49:30.063565] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.120 [2024-11-06 13:49:30.063577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.120 [2024-11-06 13:49:30.063592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.120 [2024-11-06 13:49:30.063603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.120 [2024-11-06 13:49:30.063616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.120 [2024-11-06 13:49:30.063627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.120 [2024-11-06 13:49:30.063642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.120 [2024-11-06 13:49:30.063656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.120 [2024-11-06 13:49:30.063671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:23:36.120 [2024-11-06 13:49:30.063681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.120 [2024-11-06 13:49:30.063730] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
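An aside on the layout dump above (editorial arithmetic, not captured log output): the numbers are internally consistent. 20971520 L2P entries at the reported address size of 4 bytes come to exactly 80 MiB, which is the size shown for the l2p region, and the offsets follow from that (0.12 MiB of superblock plus 80 MiB of l2p puts band_md at 80.12 MiB, and the four 8 MiB P2L checkpoint regions at 81.12/89.12/97.12/105.12 MiB run up to trim_md at 113.12 MiB). A one-line check, runnable anywhere awk is available:

awk 'BEGIN { printf "%.2f MiB\n", 20971520 * 4 / (1024 * 1024) }'   # prints 80.00, matching "Region l2p ... blocks: 80.00 MiB"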
00:23:36.120 [2024-11-06 13:49:30.063750] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:39.403 [2024-11-06 13:49:33.010282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.010376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:39.403 [2024-11-06 13:49:33.010414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2946.528 ms 00:23:39.403 [2024-11-06 13:49:33.010433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.063857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.063924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:39.403 [2024-11-06 13:49:33.063946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.052 ms 00:23:39.403 [2024-11-06 13:49:33.063958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.064167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.064184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:39.403 [2024-11-06 13:49:33.064204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:39.403 [2024-11-06 13:49:33.064216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.136923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.136988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:39.403 [2024-11-06 13:49:33.137013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.654 ms 00:23:39.403 [2024-11-06 13:49:33.137039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.137110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.137129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:39.403 [2024-11-06 13:49:33.137148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:39.403 [2024-11-06 13:49:33.137160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.138122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.138144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:39.403 [2024-11-06 13:49:33.138161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:23:39.403 [2024-11-06 13:49:33.138173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.138315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.138340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:39.403 [2024-11-06 13:49:33.138361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:23:39.403 [2024-11-06 13:49:33.138373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.164067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.164118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:39.403 [2024-11-06 
13:49:33.164138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.667 ms 00:23:39.403 [2024-11-06 13:49:33.164150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.180626] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:23:39.403 [2024-11-06 13:49:33.190936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.190987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:39.403 [2024-11-06 13:49:33.191004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.645 ms 00:23:39.403 [2024-11-06 13:49:33.191034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.280190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.280268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:39.403 [2024-11-06 13:49:33.280287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.096 ms 00:23:39.403 [2024-11-06 13:49:33.280302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.280521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.280543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:39.403 [2024-11-06 13:49:33.280556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:23:39.403 [2024-11-06 13:49:33.280570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.403 [2024-11-06 13:49:33.317782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.403 [2024-11-06 13:49:33.317847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:39.403 [2024-11-06 13:49:33.317863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.154 ms 00:23:39.403 [2024-11-06 13:49:33.317878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.404 [2024-11-06 13:49:33.354187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.404 [2024-11-06 13:49:33.354230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:39.404 [2024-11-06 13:49:33.354246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.267 ms 00:23:39.404 [2024-11-06 13:49:33.354260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.404 [2024-11-06 13:49:33.355011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.404 [2024-11-06 13:49:33.355048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:39.404 [2024-11-06 13:49:33.355060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:23:39.404 [2024-11-06 13:49:33.355074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.663 [2024-11-06 13:49:33.457837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.663 [2024-11-06 13:49:33.458082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:39.663 [2024-11-06 13:49:33.458105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.709 ms 00:23:39.663 [2024-11-06 13:49:33.458121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.663 [2024-11-06 
13:49:33.499576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.663 [2024-11-06 13:49:33.499634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:39.663 [2024-11-06 13:49:33.499655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.334 ms 00:23:39.663 [2024-11-06 13:49:33.499671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.663 [2024-11-06 13:49:33.539489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.663 [2024-11-06 13:49:33.539542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:39.663 [2024-11-06 13:49:33.539557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.774 ms 00:23:39.663 [2024-11-06 13:49:33.539571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.663 [2024-11-06 13:49:33.577983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.663 [2024-11-06 13:49:33.578040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:39.663 [2024-11-06 13:49:33.578056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.353 ms 00:23:39.663 [2024-11-06 13:49:33.578071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.663 [2024-11-06 13:49:33.578118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.663 [2024-11-06 13:49:33.578139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:39.663 [2024-11-06 13:49:33.578153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:39.663 [2024-11-06 13:49:33.578167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.663 [2024-11-06 13:49:33.578287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.663 [2024-11-06 13:49:33.578304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:39.663 [2024-11-06 13:49:33.578316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:39.663 [2024-11-06 13:49:33.578338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.663 [2024-11-06 13:49:33.579785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3544.333 ms, result 0 00:23:39.663 { 00:23:39.663 "name": "ftl0", 00:23:39.663 "uuid": "5dddf63f-e220-4d06-8bbd-f04873eff1de" 00:23:39.663 } 00:23:39.663 13:49:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:23:39.663 13:49:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:23:39.663 13:49:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:23:39.921 13:49:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:23:40.180 [2024-11-06 13:49:33.996078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:40.180 I/O size of 69632 is greater than zero copy threshold (65536). 00:23:40.180 Zero copy mechanism will not be used. 00:23:40.180 Running I/O for 4 seconds... 
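For orientation (an illustrative aside, not part of the captured output): the 'FTL startup' sequence above is the work triggered by an FTL create RPC, and the bdevperf.sh@28 lines that follow it simply confirm the new bdev answers over RPC before the workloads start. A by-hand sketch of the same setup and check, assuming an existing base bdev (its name is not recorded in this log) and the cache bdev nvc0n1p0 named in the trace; consult `rpc.py bdev_ftl_create --help` for the authoritative flags:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BASE_BDEV=my_base_bdev   # placeholder -- substitute the real base bdev name
# Create the FTL bdev with nvc0n1p0 as its NV-cache / write-buffer device:
$RPC bdev_ftl_create -b ftl0 -d "$BASE_BDEV" -c nvc0n1p0
# Mirror the readiness check traced at bdevperf.sh@28 above:
$RPC bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0 && echo "ftl0 is up"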
00:23:42.052 1788.00 IOPS, 118.73 MiB/s
[2024-11-06T13:49:37.412Z] 1870.50 IOPS, 124.21 MiB/s
[2024-11-06T13:49:38.348Z] 1912.67 IOPS, 127.01 MiB/s
[2024-11-06T13:49:38.348Z] 1933.75 IOPS, 128.41 MiB/s
00:23:44.365 Latency(us)
00:23:44.365 [2024-11-06T13:49:38.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:44.365 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:23:44.365 ftl0 : 4.00 1933.03 128.37 0.00 0.00 541.38 194.07 2059.70
00:23:44.365 [2024-11-06T13:49:38.348Z] ===================================================================================================================
00:23:44.365 [2024-11-06T13:49:38.348Z] Total : 1933.03 128.37 0.00 0.00 541.38 194.07 2059.70
00:23:44.365 [2024-11-06 13:49:38.009566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:23:44.365 "results": [
00:23:44.365 {
00:23:44.365 "job": "ftl0",
00:23:44.365 "core_mask": "0x1",
00:23:44.365 "workload": "randwrite",
00:23:44.365 "status": "finished",
00:23:44.365 "queue_depth": 1,
00:23:44.365 "io_size": 69632,
00:23:44.365 "runtime": 4.002001,
00:23:44.365 "iops": 1933.0330002416292,
00:23:44.365 "mibps": 128.36547267229568,
00:23:44.365 "io_failed": 0,
00:23:44.365 "io_timeout": 0,
00:23:44.365 "avg_latency_us": 541.3785433594327,
00:23:44.365 "min_latency_us": 194.07238095238094,
00:23:44.365 "max_latency_us": 2059.7028571428573
00:23:44.365 }
00:23:44.365 ],
00:23:44.365 "core_count": 1
00:23:44.365 }
00:23:44.365 13:49:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-06 13:49:38.184021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
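A quick cross-check of the QD1 results block above (editorial arithmetic, not log output): bdevperf's mibps field is just iops x io_size / 2^20, and the reported fields agree with each other:

awk 'BEGIN { printf "%.2f MiB/s\n", 1933.0330002416292 * 69632 / (1024 * 1024) }'   # prints 128.37, matching "mibps" above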
00:23:46.262 10706.00 IOPS, 41.82 MiB/s
[2024-11-06T13:49:41.620Z] 10917.50 IOPS, 42.65 MiB/s
[2024-11-06T13:49:42.553Z] 11057.33 IOPS, 43.19 MiB/s
[2024-11-06T13:49:42.553Z] 11093.25 IOPS, 43.33 MiB/s
00:23:48.570 Latency(us)
00:23:48.570 [2024-11-06T13:49:42.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:48.570 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:23:48.570 ftl0 : 4.02 11080.72 43.28 0.00 0.00 11527.82 233.08 21346.01
00:23:48.570 [2024-11-06T13:49:42.553Z] ===================================================================================================================
00:23:48.570 [2024-11-06T13:49:42.553Z] Total : 11080.72 43.28 0.00 0.00 11527.82 0.00 21346.01
00:23:48.570 [2024-11-06 13:49:42.210472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:23:48.570 "results": [
00:23:48.570 {
00:23:48.570 "job": "ftl0",
00:23:48.570 "core_mask": "0x1",
00:23:48.570 "workload": "randwrite",
00:23:48.570 "status": "finished",
00:23:48.570 "queue_depth": 128,
00:23:48.570 "io_size": 4096,
00:23:48.570 "runtime": 4.015625,
00:23:48.570 "iops": 11080.715953307394,
00:23:48.570 "mibps": 43.284046692607006,
00:23:48.570 "io_failed": 0,
00:23:48.570 "io_timeout": 0,
00:23:48.570 "avg_latency_us": 11527.81646889608,
00:23:48.570 "min_latency_us": 233.08190476190475,
00:23:48.570 "max_latency_us": 21346.01142857143
00:23:48.570 }
00:23:48.570 ],
00:23:48.570 "core_count": 1
00:23:48.570 }
00:23:48.570 13:49:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:23:48.570 [2024-11-06 13:49:42.366106] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
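The QD128 numbers above also square with the QD1 run by Little's law (again editorial arithmetic, not log output): average latency is roughly queue_depth / IOPS, so 128 / 11080.72 gives ~11551 us against a reported 11527.8 us, and 1 / 1933.03 gives ~517 us against 541.4 us; the small gaps come from submission overhead and runtimes slightly over 4 s.

awk 'BEGIN { printf "%.0f us\n", 128 / 11080.715953307394 * 1e6 }'   # ~11551 us, close to avg_latency_us above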
00:23:50.439 8120.00 IOPS, 31.72 MiB/s
[2024-11-06T13:49:45.797Z] 8226.50 IOPS, 32.13 MiB/s
[2024-11-06T13:49:46.733Z] 8193.00 IOPS, 32.00 MiB/s
[2024-11-06T13:49:46.733Z] 8330.00 IOPS, 32.54 MiB/s
00:23:52.750 Latency(us)
00:23:52.750 [2024-11-06T13:49:46.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.750 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:52.750 Verification LBA range: start 0x0 length 0x1400000
00:23:52.751 ftl0 : 4.01 8341.74 32.58 0.00 0.00 15299.07 261.36 19223.89
00:23:52.751 [2024-11-06T13:49:46.734Z] ===================================================================================================================
00:23:52.751 [2024-11-06T13:49:46.734Z] Total : 8341.74 32.58 0.00 0.00 15299.07 0.00 19223.89
00:23:52.751 [2024-11-06 13:49:46.397684] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:23:52.751 "results": [
00:23:52.751 {
00:23:52.751 "job": "ftl0",
00:23:52.751 "core_mask": "0x1",
00:23:52.751 "workload": "verify",
00:23:52.751 "status": "finished",
00:23:52.751 "verify_range": {
00:23:52.751 "start": 0,
00:23:52.751 "length": 20971520
00:23:52.751 },
00:23:52.751 "queue_depth": 128,
00:23:52.751 "io_size": 4096,
00:23:52.751 "runtime": 4.009477,
00:23:52.751 "iops": 8341.736341173675,
00:23:52.751 "mibps": 32.58490758270967,
00:23:52.751 "io_failed": 0,
00:23:52.751 "io_timeout": 0,
00:23:52.751 "avg_latency_us": 15299.065872322977,
00:23:52.751 "min_latency_us": 261.36380952380955,
00:23:52.751 "max_latency_us": 19223.893333333333
00:23:52.751 }
00:23:52.751 ],
00:23:52.751 "core_count": 1
00:23:52.751 }
00:23:52.751 13:49:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:23:52.751 [2024-11-06 13:49:46.594417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.751 [2024-11-06 13:49:46.594738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:52.751 [2024-11-06 13:49:46.594767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:23:52.751 [2024-11-06 13:49:46.594782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:52.751 [2024-11-06 13:49:46.594827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:52.751 [2024-11-06 13:49:46.599930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.751 [2024-11-06 13:49:46.599963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:52.751 [2024-11-06 13:49:46.599979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.079 ms
00:23:52.751 [2024-11-06 13:49:46.599990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:52.751 [2024-11-06 13:49:46.601870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.751 [2024-11-06 13:49:46.602035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:52.751 [2024-11-06 13:49:46.602064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.848 ms
00:23:52.751 [2024-11-06 13:49:46.602084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.011 [2024-11-06 13:49:46.770903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.011 [2024-11-06 13:49:46.770947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:23:53.011 [2024-11-06 13:49:46.770972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 168.787 ms 00:23:53.011 [2024-11-06 13:49:46.770983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.011 [2024-11-06 13:49:46.776123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.011 [2024-11-06 13:49:46.776288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:53.011 [2024-11-06 13:49:46.776315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.099 ms 00:23:53.011 [2024-11-06 13:49:46.776327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.011 [2024-11-06 13:49:46.815373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.011 [2024-11-06 13:49:46.815416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:53.011 [2024-11-06 13:49:46.815436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.965 ms 00:23:53.011 [2024-11-06 13:49:46.815447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.011 [2024-11-06 13:49:46.839384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.011 [2024-11-06 13:49:46.839449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:53.011 [2024-11-06 13:49:46.839470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.890 ms 00:23:53.011 [2024-11-06 13:49:46.839483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.011 [2024-11-06 13:49:46.839640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.011 [2024-11-06 13:49:46.839655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.011 [2024-11-06 13:49:46.839676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:23:53.011 [2024-11-06 13:49:46.839687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.011 [2024-11-06 13:49:46.877042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.011 [2024-11-06 13:49:46.877082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:53.011 [2024-11-06 13:49:46.877100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.331 ms 00:23:53.011 [2024-11-06 13:49:46.877111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.011 [2024-11-06 13:49:46.913574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.011 [2024-11-06 13:49:46.913613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:53.011 [2024-11-06 13:49:46.913630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.417 ms 00:23:53.012 [2024-11-06 13:49:46.913641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.012 [2024-11-06 13:49:46.950484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.012 [2024-11-06 13:49:46.950522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:53.012 [2024-11-06 13:49:46.950539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.796 ms 00:23:53.012 [2024-11-06 13:49:46.950550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.012 [2024-11-06 13:49:46.985728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.012 [2024-11-06 
13:49:46.985765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:53.012 [2024-11-06 13:49:46.985787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.076 ms 00:23:53.012 [2024-11-06 13:49:46.985796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.012 [2024-11-06 13:49:46.985839] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.012 [2024-11-06 13:49:46.985858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.985992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986844] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.012 [2024-11-06 13:49:46.986904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.986922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.986932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.986947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.986958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.986972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.986982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.986996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987172] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-06 13:49:46.987232] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:53.013 [2024-11-06 13:49:46.987245] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5dddf63f-e220-4d06-8bbd-f04873eff1de 00:23:53.013 [2024-11-06 13:49:46.987261] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:53.013 [2024-11-06 13:49:46.987278] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:53.013 [2024-11-06 13:49:46.987289] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:53.013 [2024-11-06 13:49:46.987303] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:53.013 [2024-11-06 13:49:46.987313] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:53.013 [2024-11-06 13:49:46.987328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:53.013 [2024-11-06 13:49:46.987338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:53.013 [2024-11-06 13:49:46.987354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:53.013 [2024-11-06 13:49:46.987363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:53.013 [2024-11-06 13:49:46.987377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-06 13:49:46.987388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:53.013 [2024-11-06 13:49:46.987422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:23:53.013 [2024-11-06 13:49:46.987432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.273 [2024-11-06 13:49:47.008754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.273 [2024-11-06 13:49:47.008951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:53.273 [2024-11-06 13:49:47.008978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.266 ms 00:23:53.273 [2024-11-06 13:49:47.008990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.273 [2024-11-06 13:49:47.009644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.273 [2024-11-06 13:49:47.009661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:53.273 [2024-11-06 13:49:47.009677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:23:53.273 [2024-11-06 13:49:47.009688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.273 [2024-11-06 13:49:47.067838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.273 [2024-11-06 13:49:47.067900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.273 [2024-11-06 13:49:47.067922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.273 [2024-11-06 13:49:47.067933] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:53.273 [2024-11-06 13:49:47.068049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.273 [2024-11-06 13:49:47.068061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.273 [2024-11-06 13:49:47.068077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.273 [2024-11-06 13:49:47.068087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.273 [2024-11-06 13:49:47.068218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.273 [2024-11-06 13:49:47.068233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.273 [2024-11-06 13:49:47.068248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.273 [2024-11-06 13:49:47.068258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.273 [2024-11-06 13:49:47.068282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.273 [2024-11-06 13:49:47.068292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.273 [2024-11-06 13:49:47.068306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.273 [2024-11-06 13:49:47.068316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.273 [2024-11-06 13:49:47.206538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.273 [2024-11-06 13:49:47.206620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.273 [2024-11-06 13:49:47.206644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.273 [2024-11-06 13:49:47.206656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.533 [2024-11-06 13:49:47.312384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.533 [2024-11-06 13:49:47.312464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.533 [2024-11-06 13:49:47.312485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.533 [2024-11-06 13:49:47.312497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.533 [2024-11-06 13:49:47.312655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.533 [2024-11-06 13:49:47.312673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.533 [2024-11-06 13:49:47.312689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.533 [2024-11-06 13:49:47.312700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.533 [2024-11-06 13:49:47.312793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.533 [2024-11-06 13:49:47.312808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.533 [2024-11-06 13:49:47.312823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.533 [2024-11-06 13:49:47.312834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.533 [2024-11-06 13:49:47.312955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.533 [2024-11-06 13:49:47.312969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.533 [2024-11-06 13:49:47.312993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:53.533 [2024-11-06 13:49:47.313004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.533 [2024-11-06 13:49:47.313072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.533 [2024-11-06 13:49:47.313085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:53.533 [2024-11-06 13:49:47.313100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.533 [2024-11-06 13:49:47.313110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.533 [2024-11-06 13:49:47.313161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.533 [2024-11-06 13:49:47.313174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.533 [2024-11-06 13:49:47.313192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.533 [2024-11-06 13:49:47.313203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.533 [2024-11-06 13:49:47.313259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.533 [2024-11-06 13:49:47.313281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.533 [2024-11-06 13:49:47.313296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.533 [2024-11-06 13:49:47.313306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.533 [2024-11-06 13:49:47.313471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 719.003 ms, result 0 00:23:53.533 true 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75554 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 75554 ']' 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 75554 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75554 00:23:53.533 killing process with pid 75554 00:23:53.533 Received shutdown signal, test time was about 4.000000 seconds 00:23:53.533 00:23:53.533 Latency(us) 00:23:53.533 [2024-11-06T13:49:47.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.533 [2024-11-06T13:49:47.516Z] =================================================================================================================== 00:23:53.533 [2024-11-06T13:49:47.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75554' 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 75554 00:23:53.533 13:49:47 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 75554 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:57.721 Remove shared memory files 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:57.721 13:49:51 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:57.721 ************************************ 00:23:57.721 END TEST ftl_bdevperf 00:23:57.721 ************************************ 00:23:57.721 00:23:57.721 real 0m25.622s 00:23:57.721 user 0m28.467s 00:23:57.721 sys 0m1.357s 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:57.721 13:49:51 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:57.721 13:49:51 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:57.721 13:49:51 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:57.721 13:49:51 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:57.721 13:49:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:57.721 ************************************ 00:23:57.721 START TEST ftl_trim 00:23:57.721 ************************************ 00:23:57.721 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:57.721 * Looking for test storage... 00:23:57.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:57.721 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:57.721 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:23:57.721 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:57.721 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.721 13:49:51 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:57.721 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.721 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:57.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.721 --rc genhtml_branch_coverage=1 00:23:57.721 --rc genhtml_function_coverage=1 00:23:57.722 --rc genhtml_legend=1 00:23:57.722 --rc geninfo_all_blocks=1 00:23:57.722 --rc geninfo_unexecuted_blocks=1 00:23:57.722 00:23:57.722 ' 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:57.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.722 --rc genhtml_branch_coverage=1 00:23:57.722 --rc genhtml_function_coverage=1 00:23:57.722 --rc genhtml_legend=1 00:23:57.722 --rc geninfo_all_blocks=1 00:23:57.722 --rc geninfo_unexecuted_blocks=1 00:23:57.722 00:23:57.722 ' 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:57.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.722 --rc genhtml_branch_coverage=1 00:23:57.722 --rc genhtml_function_coverage=1 00:23:57.722 --rc genhtml_legend=1 00:23:57.722 --rc geninfo_all_blocks=1 00:23:57.722 --rc geninfo_unexecuted_blocks=1 00:23:57.722 00:23:57.722 ' 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:57.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.722 --rc genhtml_branch_coverage=1 00:23:57.722 --rc genhtml_function_coverage=1 00:23:57.722 --rc genhtml_legend=1 00:23:57.722 --rc geninfo_all_blocks=1 00:23:57.722 --rc geninfo_unexecuted_blocks=1 00:23:57.722 00:23:57.722 ' 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:57.722 13:49:51 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75917 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75917 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75917 ']' 00:23:57.722 13:49:51 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:57.722 13:49:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:57.722 [2024-11-06 13:49:51.663280] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:23:57.722 [2024-11-06 13:49:51.663667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75917 ] 00:23:57.980 [2024-11-06 13:49:51.857851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.238 [2024-11-06 13:49:52.012886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.238 [2024-11-06 13:49:52.013120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.238 [2024-11-06 13:49:52.013289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.187 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:59.187 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:23:59.187 13:49:53 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:59.187 13:49:53 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:59.187 13:49:53 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:59.187 13:49:53 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:59.187 13:49:53 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:59.187 13:49:53 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:59.769 13:49:53 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:59.769 13:49:53 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:59.769 13:49:53 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:59.769 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:59.769 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:59.769 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:23:59.769 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:23:59.769 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:59.769 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:59.769 { 00:23:59.769 "name": "nvme0n1", 00:23:59.769 "aliases": [ 
00:23:59.769 "9e1e4cca-72bd-480b-84a9-354d9c6d34d2" 00:23:59.769 ], 00:23:59.769 "product_name": "NVMe disk", 00:23:59.769 "block_size": 4096, 00:23:59.769 "num_blocks": 1310720, 00:23:59.769 "uuid": "9e1e4cca-72bd-480b-84a9-354d9c6d34d2", 00:23:59.769 "numa_id": -1, 00:23:59.769 "assigned_rate_limits": { 00:23:59.769 "rw_ios_per_sec": 0, 00:23:59.769 "rw_mbytes_per_sec": 0, 00:23:59.769 "r_mbytes_per_sec": 0, 00:23:59.769 "w_mbytes_per_sec": 0 00:23:59.769 }, 00:23:59.769 "claimed": true, 00:23:59.769 "claim_type": "read_many_write_one", 00:23:59.769 "zoned": false, 00:23:59.769 "supported_io_types": { 00:23:59.769 "read": true, 00:23:59.769 "write": true, 00:23:59.769 "unmap": true, 00:23:59.769 "flush": true, 00:23:59.769 "reset": true, 00:23:59.769 "nvme_admin": true, 00:23:59.769 "nvme_io": true, 00:23:59.769 "nvme_io_md": false, 00:23:59.769 "write_zeroes": true, 00:23:59.769 "zcopy": false, 00:23:59.769 "get_zone_info": false, 00:23:59.769 "zone_management": false, 00:23:59.769 "zone_append": false, 00:23:59.769 "compare": true, 00:23:59.769 "compare_and_write": false, 00:23:59.769 "abort": true, 00:23:59.769 "seek_hole": false, 00:23:59.769 "seek_data": false, 00:23:59.769 "copy": true, 00:23:59.769 "nvme_iov_md": false 00:23:59.769 }, 00:23:59.769 "driver_specific": { 00:23:59.769 "nvme": [ 00:23:59.769 { 00:23:59.769 "pci_address": "0000:00:11.0", 00:23:59.769 "trid": { 00:23:59.769 "trtype": "PCIe", 00:23:59.769 "traddr": "0000:00:11.0" 00:23:59.769 }, 00:23:59.769 "ctrlr_data": { 00:23:59.769 "cntlid": 0, 00:23:59.769 "vendor_id": "0x1b36", 00:23:59.769 "model_number": "QEMU NVMe Ctrl", 00:23:59.769 "serial_number": "12341", 00:23:59.769 "firmware_revision": "8.0.0", 00:23:59.769 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:59.769 "oacs": { 00:23:59.769 "security": 0, 00:23:59.769 "format": 1, 00:23:59.769 "firmware": 0, 00:23:59.769 "ns_manage": 1 00:23:59.769 }, 00:23:59.769 "multi_ctrlr": false, 00:23:59.769 "ana_reporting": false 00:23:59.769 }, 00:23:59.769 "vs": { 00:23:59.769 "nvme_version": "1.4" 00:23:59.769 }, 00:23:59.769 "ns_data": { 00:23:59.769 "id": 1, 00:23:59.769 "can_share": false 00:23:59.769 } 00:23:59.770 } 00:23:59.770 ], 00:23:59.770 "mp_policy": "active_passive" 00:23:59.770 } 00:23:59.770 } 00:23:59.770 ]' 00:23:59.770 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:00.029 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:24:00.029 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:00.029 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:24:00.029 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:24:00.029 13:49:53 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:24:00.029 13:49:53 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:24:00.029 13:49:53 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:00.029 13:49:53 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:24:00.029 13:49:53 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:00.029 13:49:53 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:00.288 13:49:54 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f3788ede-fdef-4c56-90f1-ec61c0f50730 00:24:00.288 13:49:54 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:24:00.288 13:49:54 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f3788ede-fdef-4c56-90f1-ec61c0f50730 00:24:00.288 13:49:54 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:00.856 13:49:54 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=1f7bd5ff-d486-441e-a5d5-203f3022a70d 00:24:00.856 13:49:54 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1f7bd5ff-d486-441e-a5d5-203f3022a70d 00:24:01.114 13:49:54 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.115 13:49:54 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.115 13:49:54 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:24:01.115 13:49:54 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:01.115 13:49:54 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.115 13:49:54 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:24:01.115 13:49:54 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.115 13:49:54 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.115 13:49:54 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:01.115 13:49:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:24:01.115 13:49:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:24:01.115 13:49:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.373 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:01.373 { 00:24:01.373 "name": "6530223d-336e-4bdb-8c63-2bbdd6da7741", 00:24:01.373 "aliases": [ 00:24:01.374 "lvs/nvme0n1p0" 00:24:01.374 ], 00:24:01.374 "product_name": "Logical Volume", 00:24:01.374 "block_size": 4096, 00:24:01.374 "num_blocks": 26476544, 00:24:01.374 "uuid": "6530223d-336e-4bdb-8c63-2bbdd6da7741", 00:24:01.374 "assigned_rate_limits": { 00:24:01.374 "rw_ios_per_sec": 0, 00:24:01.374 "rw_mbytes_per_sec": 0, 00:24:01.374 "r_mbytes_per_sec": 0, 00:24:01.374 "w_mbytes_per_sec": 0 00:24:01.374 }, 00:24:01.374 "claimed": false, 00:24:01.374 "zoned": false, 00:24:01.374 "supported_io_types": { 00:24:01.374 "read": true, 00:24:01.374 "write": true, 00:24:01.374 "unmap": true, 00:24:01.374 "flush": false, 00:24:01.374 "reset": true, 00:24:01.374 "nvme_admin": false, 00:24:01.374 "nvme_io": false, 00:24:01.374 "nvme_io_md": false, 00:24:01.374 "write_zeroes": true, 00:24:01.374 "zcopy": false, 00:24:01.374 "get_zone_info": false, 00:24:01.374 "zone_management": false, 00:24:01.374 "zone_append": false, 00:24:01.374 "compare": false, 00:24:01.374 "compare_and_write": false, 00:24:01.374 "abort": false, 00:24:01.374 "seek_hole": true, 00:24:01.374 "seek_data": true, 00:24:01.374 "copy": false, 00:24:01.374 "nvme_iov_md": false 00:24:01.374 }, 00:24:01.374 "driver_specific": { 00:24:01.374 "lvol": { 00:24:01.374 "lvol_store_uuid": "1f7bd5ff-d486-441e-a5d5-203f3022a70d", 00:24:01.374 "base_bdev": "nvme0n1", 00:24:01.374 "thin_provision": true, 00:24:01.374 "num_allocated_clusters": 0, 00:24:01.374 "snapshot": false, 00:24:01.374 "clone": false, 00:24:01.374 "esnap_clone": false 00:24:01.374 } 00:24:01.374 } 00:24:01.374 } 00:24:01.374 ]' 00:24:01.374 13:49:55 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:01.374 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:24:01.374 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:01.374 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:01.374 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:01.374 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:24:01.374 13:49:55 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:24:01.374 13:49:55 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:24:01.374 13:49:55 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:01.633 13:49:55 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:01.633 13:49:55 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:01.633 13:49:55 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.633 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.633 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:01.633 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:24:01.633 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:24:01.633 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:01.892 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:01.892 { 00:24:01.892 "name": "6530223d-336e-4bdb-8c63-2bbdd6da7741", 00:24:01.892 "aliases": [ 00:24:01.892 "lvs/nvme0n1p0" 00:24:01.892 ], 00:24:01.892 "product_name": "Logical Volume", 00:24:01.892 "block_size": 4096, 00:24:01.892 "num_blocks": 26476544, 00:24:01.892 "uuid": "6530223d-336e-4bdb-8c63-2bbdd6da7741", 00:24:01.892 "assigned_rate_limits": { 00:24:01.892 "rw_ios_per_sec": 0, 00:24:01.892 "rw_mbytes_per_sec": 0, 00:24:01.892 "r_mbytes_per_sec": 0, 00:24:01.892 "w_mbytes_per_sec": 0 00:24:01.892 }, 00:24:01.892 "claimed": false, 00:24:01.892 "zoned": false, 00:24:01.892 "supported_io_types": { 00:24:01.892 "read": true, 00:24:01.892 "write": true, 00:24:01.892 "unmap": true, 00:24:01.892 "flush": false, 00:24:01.892 "reset": true, 00:24:01.892 "nvme_admin": false, 00:24:01.892 "nvme_io": false, 00:24:01.892 "nvme_io_md": false, 00:24:01.892 "write_zeroes": true, 00:24:01.892 "zcopy": false, 00:24:01.892 "get_zone_info": false, 00:24:01.892 "zone_management": false, 00:24:01.892 "zone_append": false, 00:24:01.892 "compare": false, 00:24:01.892 "compare_and_write": false, 00:24:01.892 "abort": false, 00:24:01.892 "seek_hole": true, 00:24:01.892 "seek_data": true, 00:24:01.892 "copy": false, 00:24:01.892 "nvme_iov_md": false 00:24:01.892 }, 00:24:01.892 "driver_specific": { 00:24:01.892 "lvol": { 00:24:01.892 "lvol_store_uuid": "1f7bd5ff-d486-441e-a5d5-203f3022a70d", 00:24:01.892 "base_bdev": "nvme0n1", 00:24:01.892 "thin_provision": true, 00:24:01.892 "num_allocated_clusters": 0, 00:24:01.892 "snapshot": false, 00:24:01.892 "clone": false, 00:24:01.892 "esnap_clone": false 00:24:01.892 } 00:24:01.892 } 00:24:01.892 } 00:24:01.892 ]' 00:24:01.892 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:01.892 13:49:55 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:24:01.892 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:01.892 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:01.893 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:01.893 13:49:55 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:24:01.893 13:49:55 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:24:01.893 13:49:55 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:02.152 13:49:56 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:24:02.152 13:49:56 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:24:02.152 13:49:56 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:02.152 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:02.152 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:02.152 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:24:02.152 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:24:02.152 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6530223d-336e-4bdb-8c63-2bbdd6da7741 00:24:02.411 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:02.411 { 00:24:02.411 "name": "6530223d-336e-4bdb-8c63-2bbdd6da7741", 00:24:02.411 "aliases": [ 00:24:02.411 "lvs/nvme0n1p0" 00:24:02.411 ], 00:24:02.411 "product_name": "Logical Volume", 00:24:02.411 "block_size": 4096, 00:24:02.411 "num_blocks": 26476544, 00:24:02.411 "uuid": "6530223d-336e-4bdb-8c63-2bbdd6da7741", 00:24:02.411 "assigned_rate_limits": { 00:24:02.411 "rw_ios_per_sec": 0, 00:24:02.411 "rw_mbytes_per_sec": 0, 00:24:02.411 "r_mbytes_per_sec": 0, 00:24:02.411 "w_mbytes_per_sec": 0 00:24:02.411 }, 00:24:02.411 "claimed": false, 00:24:02.411 "zoned": false, 00:24:02.411 "supported_io_types": { 00:24:02.411 "read": true, 00:24:02.411 "write": true, 00:24:02.411 "unmap": true, 00:24:02.411 "flush": false, 00:24:02.411 "reset": true, 00:24:02.411 "nvme_admin": false, 00:24:02.411 "nvme_io": false, 00:24:02.411 "nvme_io_md": false, 00:24:02.411 "write_zeroes": true, 00:24:02.411 "zcopy": false, 00:24:02.411 "get_zone_info": false, 00:24:02.411 "zone_management": false, 00:24:02.411 "zone_append": false, 00:24:02.411 "compare": false, 00:24:02.411 "compare_and_write": false, 00:24:02.411 "abort": false, 00:24:02.411 "seek_hole": true, 00:24:02.411 "seek_data": true, 00:24:02.411 "copy": false, 00:24:02.411 "nvme_iov_md": false 00:24:02.411 }, 00:24:02.411 "driver_specific": { 00:24:02.411 "lvol": { 00:24:02.411 "lvol_store_uuid": "1f7bd5ff-d486-441e-a5d5-203f3022a70d", 00:24:02.411 "base_bdev": "nvme0n1", 00:24:02.411 "thin_provision": true, 00:24:02.411 "num_allocated_clusters": 0, 00:24:02.411 "snapshot": false, 00:24:02.411 "clone": false, 00:24:02.411 "esnap_clone": false 00:24:02.411 } 00:24:02.411 } 00:24:02.411 } 00:24:02.411 ]' 00:24:02.411 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:02.411 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:24:02.411 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:02.411 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:24:02.411 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:02.411 13:49:56 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:24:02.411 13:49:56 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:24:02.411 13:49:56 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6530223d-336e-4bdb-8c63-2bbdd6da7741 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:24:02.671 [2024-11-06 13:49:56.616379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.616441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:02.671 [2024-11-06 13:49:56.616463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:02.671 [2024-11-06 13:49:56.616475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.620592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.620636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.671 [2024-11-06 13:49:56.620652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.059 ms 00:24:02.671 [2024-11-06 13:49:56.620664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.620817] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:02.671 [2024-11-06 13:49:56.621905] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:02.671 [2024-11-06 13:49:56.621944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.621956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.671 [2024-11-06 13:49:56.621971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:24:02.671 [2024-11-06 13:49:56.621982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.622125] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 03c79bca-64f5-43d2-9303-3144f18687a3 00:24:02.671 [2024-11-06 13:49:56.624687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.624726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:02.671 [2024-11-06 13:49:56.624739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:02.671 [2024-11-06 13:49:56.624754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.639538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.639579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.671 [2024-11-06 13:49:56.639602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.629 ms 00:24:02.671 [2024-11-06 13:49:56.639616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.639817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.639837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.671 [2024-11-06 13:49:56.639849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.102 ms 00:24:02.671 [2024-11-06 13:49:56.639869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.639937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.639954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:02.671 [2024-11-06 13:49:56.639966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:02.671 [2024-11-06 13:49:56.639994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.640087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:02.671 [2024-11-06 13:49:56.646902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.646942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.671 [2024-11-06 13:49:56.646959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.819 ms 00:24:02.671 [2024-11-06 13:49:56.646970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.647081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.647095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:02.671 [2024-11-06 13:49:56.647111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:02.671 [2024-11-06 13:49:56.647142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.671 [2024-11-06 13:49:56.647207] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:02.671 [2024-11-06 13:49:56.647347] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:02.671 [2024-11-06 13:49:56.647371] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:02.671 [2024-11-06 13:49:56.647386] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:02.671 [2024-11-06 13:49:56.647404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:02.671 [2024-11-06 13:49:56.647417] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:02.671 [2024-11-06 13:49:56.647434] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:02.671 [2024-11-06 13:49:56.647445] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:02.671 [2024-11-06 13:49:56.647459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:02.671 [2024-11-06 13:49:56.647474] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:02.671 [2024-11-06 13:49:56.647488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.671 [2024-11-06 13:49:56.647500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:02.671 [2024-11-06 13:49:56.647516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:24:02.671 [2024-11-06 13:49:56.647527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.672 [2024-11-06 13:49:56.647655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:02.672 [2024-11-06 13:49:56.647666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:02.672 [2024-11-06 13:49:56.647681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:02.672 [2024-11-06 13:49:56.647692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.672 [2024-11-06 13:49:56.647868] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:02.672 [2024-11-06 13:49:56.647881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:02.672 [2024-11-06 13:49:56.647895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.672 [2024-11-06 13:49:56.647907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.672 [2024-11-06 13:49:56.647921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:02.672 [2024-11-06 13:49:56.647931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:02.672 [2024-11-06 13:49:56.647960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:02.672 [2024-11-06 13:49:56.647970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:02.672 [2024-11-06 13:49:56.647984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:02.672 [2024-11-06 13:49:56.647993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.672 [2024-11-06 13:49:56.648006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:02.672 [2024-11-06 13:49:56.648035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:02.672 [2024-11-06 13:49:56.648050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.672 [2024-11-06 13:49:56.648060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:02.672 [2024-11-06 13:49:56.648073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:02.672 [2024-11-06 13:49:56.648084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:02.672 [2024-11-06 13:49:56.648110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:02.672 [2024-11-06 13:49:56.648125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:02.672 [2024-11-06 13:49:56.648152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.672 [2024-11-06 13:49:56.648175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:02.672 [2024-11-06 13:49:56.648185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.672 [2024-11-06 13:49:56.648208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:02.672 [2024-11-06 13:49:56.648221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.672 [2024-11-06 13:49:56.648243] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:24:02.672 [2024-11-06 13:49:56.648254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.672 [2024-11-06 13:49:56.648276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:02.672 [2024-11-06 13:49:56.648292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.672 [2024-11-06 13:49:56.648315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:02.672 [2024-11-06 13:49:56.648324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:02.672 [2024-11-06 13:49:56.648337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.672 [2024-11-06 13:49:56.648347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:02.672 [2024-11-06 13:49:56.648360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:02.672 [2024-11-06 13:49:56.648370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:02.672 [2024-11-06 13:49:56.648392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:02.672 [2024-11-06 13:49:56.648405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648414] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:02.672 [2024-11-06 13:49:56.648429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:02.672 [2024-11-06 13:49:56.648440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.672 [2024-11-06 13:49:56.648455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.672 [2024-11-06 13:49:56.648467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:02.672 [2024-11-06 13:49:56.648484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:02.672 [2024-11-06 13:49:56.648494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:02.672 [2024-11-06 13:49:56.648507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:02.672 [2024-11-06 13:49:56.648518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:02.672 [2024-11-06 13:49:56.648532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:02.672 [2024-11-06 13:49:56.648548] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:02.672 [2024-11-06 13:49:56.648565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.672 [2024-11-06 13:49:56.648581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:02.672 [2024-11-06 13:49:56.648596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:02.672 [2024-11-06 13:49:56.648607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:24:02.672 [2024-11-06 13:49:56.648622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:02.672 [2024-11-06 13:49:56.648633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:02.672 [2024-11-06 13:49:56.648647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:02.672 [2024-11-06 13:49:56.648657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:02.672 [2024-11-06 13:49:56.648671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:02.672 [2024-11-06 13:49:56.648682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:02.672 [2024-11-06 13:49:56.648699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:02.672 [2024-11-06 13:49:56.648710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:02.672 [2024-11-06 13:49:56.648724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:02.672 [2024-11-06 13:49:56.648734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:02.672 [2024-11-06 13:49:56.648750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:02.672 [2024-11-06 13:49:56.648760] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:02.672 [2024-11-06 13:49:56.648780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.672 [2024-11-06 13:49:56.648792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:02.672 [2024-11-06 13:49:56.648806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:02.672 [2024-11-06 13:49:56.648818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:02.672 [2024-11-06 13:49:56.648833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:02.672 [2024-11-06 13:49:56.648846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.672 [2024-11-06 13:49:56.648861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:02.672 [2024-11-06 13:49:56.648872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:24:02.672 [2024-11-06 13:49:56.648886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.672 [2024-11-06 13:49:56.649068] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:24:02.672 [2024-11-06 13:49:56.649090] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:05.956 [2024-11-06 13:49:59.917303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.956 [2024-11-06 13:49:59.917605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:05.956 [2024-11-06 13:49:59.917635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3268.212 ms 00:24:05.956 [2024-11-06 13:49:59.917651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.215 [2024-11-06 13:49:59.969920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.215 [2024-11-06 13:49:59.969987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:06.215 [2024-11-06 13:49:59.970006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.793 ms 00:24:06.215 [2024-11-06 13:49:59.970041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.215 [2024-11-06 13:49:59.970267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.215 [2024-11-06 13:49:59.970286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:06.215 [2024-11-06 13:49:59.970299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:06.215 [2024-11-06 13:49:59.970318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.215 [2024-11-06 13:50:00.042254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.215 [2024-11-06 13:50:00.042327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:06.215 [2024-11-06 13:50:00.042350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.827 ms 00:24:06.215 [2024-11-06 13:50:00.042368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.215 [2024-11-06 13:50:00.042567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.215 [2024-11-06 13:50:00.042586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:06.215 [2024-11-06 13:50:00.042598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:06.215 [2024-11-06 13:50:00.042613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.215 [2024-11-06 13:50:00.043472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.215 [2024-11-06 13:50:00.043500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:06.215 [2024-11-06 13:50:00.043512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:24:06.215 [2024-11-06 13:50:00.043527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.215 [2024-11-06 13:50:00.043690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.215 [2024-11-06 13:50:00.043712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:06.215 [2024-11-06 13:50:00.043724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:06.215 [2024-11-06 13:50:00.043741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.215 [2024-11-06 13:50:00.072144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.215 [2024-11-06 13:50:00.072407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:24:06.215 [2024-11-06 13:50:00.072523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.326 ms 00:24:06.215 [2024-11-06 13:50:00.072569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.215 [2024-11-06 13:50:00.089251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:06.215 [2024-11-06 13:50:00.117930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.215 [2024-11-06 13:50:00.118210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:06.215 [2024-11-06 13:50:00.118315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.092 ms 00:24:06.215 [2024-11-06 13:50:00.118382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.474 [2024-11-06 13:50:00.221222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.474 [2024-11-06 13:50:00.221506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:06.474 [2024-11-06 13:50:00.221627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.608 ms 00:24:06.474 [2024-11-06 13:50:00.221667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.474 [2024-11-06 13:50:00.222038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.474 [2024-11-06 13:50:00.222091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:06.474 [2024-11-06 13:50:00.222187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:24:06.474 [2024-11-06 13:50:00.222203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.474 [2024-11-06 13:50:00.261431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.474 [2024-11-06 13:50:00.261480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:06.474 [2024-11-06 13:50:00.261500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.154 ms 00:24:06.474 [2024-11-06 13:50:00.261512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.474 [2024-11-06 13:50:00.298717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.474 [2024-11-06 13:50:00.298758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:06.474 [2024-11-06 13:50:00.298779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.070 ms 00:24:06.474 [2024-11-06 13:50:00.298790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.474 [2024-11-06 13:50:00.299708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.474 [2024-11-06 13:50:00.299733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:06.474 [2024-11-06 13:50:00.299749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:24:06.474 [2024-11-06 13:50:00.299761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.474 [2024-11-06 13:50:00.413498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.474 [2024-11-06 13:50:00.413573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:06.474 [2024-11-06 13:50:00.413601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.662 ms 00:24:06.474 [2024-11-06 13:50:00.413613] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:06.474 [2024-11-06 13:50:00.453968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.474 [2024-11-06 13:50:00.454246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:06.474 [2024-11-06 13:50:00.454277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.201 ms 00:24:06.474 [2024-11-06 13:50:00.454290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.733 [2024-11-06 13:50:00.494399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.733 [2024-11-06 13:50:00.494458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:06.733 [2024-11-06 13:50:00.494479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.897 ms 00:24:06.733 [2024-11-06 13:50:00.494490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.733 [2024-11-06 13:50:00.531906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.733 [2024-11-06 13:50:00.531956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:06.733 [2024-11-06 13:50:00.531977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.289 ms 00:24:06.733 [2024-11-06 13:50:00.532008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.733 [2024-11-06 13:50:00.532157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.733 [2024-11-06 13:50:00.532176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:06.733 [2024-11-06 13:50:00.532196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:06.733 [2024-11-06 13:50:00.532207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.733 [2024-11-06 13:50:00.532331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.733 [2024-11-06 13:50:00.532344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:06.733 [2024-11-06 13:50:00.532364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:06.733 [2024-11-06 13:50:00.532374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.733 [2024-11-06 13:50:00.533917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:06.733 [2024-11-06 13:50:00.538800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3917.166 ms, result 0 00:24:06.733 [2024-11-06 13:50:00.539825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:06.733 { 00:24:06.733 "name": "ftl0", 00:24:06.733 "uuid": "03c79bca-64f5-43d2-9303-3144f18687a3" 00:24:06.733 } 00:24:06.733 13:50:00 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:24:06.733 13:50:00 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:24:06.733 13:50:00 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:06.733 13:50:00 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:24:06.733 13:50:00 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:06.733 13:50:00 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:06.733 13:50:00 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:06.992 13:50:00 
ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:24:07.250 [ 00:24:07.250 { 00:24:07.250 "name": "ftl0", 00:24:07.250 "aliases": [ 00:24:07.250 "03c79bca-64f5-43d2-9303-3144f18687a3" 00:24:07.250 ], 00:24:07.250 "product_name": "FTL disk", 00:24:07.250 "block_size": 4096, 00:24:07.250 "num_blocks": 23592960, 00:24:07.250 "uuid": "03c79bca-64f5-43d2-9303-3144f18687a3", 00:24:07.250 "assigned_rate_limits": { 00:24:07.250 "rw_ios_per_sec": 0, 00:24:07.250 "rw_mbytes_per_sec": 0, 00:24:07.250 "r_mbytes_per_sec": 0, 00:24:07.250 "w_mbytes_per_sec": 0 00:24:07.250 }, 00:24:07.250 "claimed": false, 00:24:07.250 "zoned": false, 00:24:07.250 "supported_io_types": { 00:24:07.250 "read": true, 00:24:07.250 "write": true, 00:24:07.250 "unmap": true, 00:24:07.250 "flush": true, 00:24:07.250 "reset": false, 00:24:07.250 "nvme_admin": false, 00:24:07.250 "nvme_io": false, 00:24:07.250 "nvme_io_md": false, 00:24:07.250 "write_zeroes": true, 00:24:07.250 "zcopy": false, 00:24:07.250 "get_zone_info": false, 00:24:07.250 "zone_management": false, 00:24:07.250 "zone_append": false, 00:24:07.250 "compare": false, 00:24:07.250 "compare_and_write": false, 00:24:07.250 "abort": false, 00:24:07.250 "seek_hole": false, 00:24:07.250 "seek_data": false, 00:24:07.250 "copy": false, 00:24:07.250 "nvme_iov_md": false 00:24:07.250 }, 00:24:07.250 "driver_specific": { 00:24:07.250 "ftl": { 00:24:07.250 "base_bdev": "6530223d-336e-4bdb-8c63-2bbdd6da7741", 00:24:07.250 "cache": "nvc0n1p0" 00:24:07.250 } 00:24:07.250 } 00:24:07.250 } 00:24:07.250 ] 00:24:07.250 13:50:01 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:24:07.250 13:50:01 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:24:07.250 13:50:01 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:07.509 13:50:01 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:24:07.509 13:50:01 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:24:07.768 13:50:01 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:24:07.768 { 00:24:07.768 "name": "ftl0", 00:24:07.768 "aliases": [ 00:24:07.768 "03c79bca-64f5-43d2-9303-3144f18687a3" 00:24:07.768 ], 00:24:07.768 "product_name": "FTL disk", 00:24:07.768 "block_size": 4096, 00:24:07.768 "num_blocks": 23592960, 00:24:07.768 "uuid": "03c79bca-64f5-43d2-9303-3144f18687a3", 00:24:07.768 "assigned_rate_limits": { 00:24:07.768 "rw_ios_per_sec": 0, 00:24:07.768 "rw_mbytes_per_sec": 0, 00:24:07.768 "r_mbytes_per_sec": 0, 00:24:07.768 "w_mbytes_per_sec": 0 00:24:07.768 }, 00:24:07.768 "claimed": false, 00:24:07.768 "zoned": false, 00:24:07.768 "supported_io_types": { 00:24:07.768 "read": true, 00:24:07.768 "write": true, 00:24:07.768 "unmap": true, 00:24:07.768 "flush": true, 00:24:07.768 "reset": false, 00:24:07.768 "nvme_admin": false, 00:24:07.768 "nvme_io": false, 00:24:07.768 "nvme_io_md": false, 00:24:07.768 "write_zeroes": true, 00:24:07.768 "zcopy": false, 00:24:07.768 "get_zone_info": false, 00:24:07.768 "zone_management": false, 00:24:07.768 "zone_append": false, 00:24:07.768 "compare": false, 00:24:07.768 "compare_and_write": false, 00:24:07.768 "abort": false, 00:24:07.768 "seek_hole": false, 00:24:07.768 "seek_data": false, 00:24:07.768 "copy": false, 00:24:07.768 "nvme_iov_md": false 00:24:07.768 }, 00:24:07.768 "driver_specific": { 00:24:07.768 "ftl": { 00:24:07.768 "base_bdev": 
"6530223d-336e-4bdb-8c63-2bbdd6da7741", 00:24:07.768 "cache": "nvc0n1p0" 00:24:07.768 } 00:24:07.768 } 00:24:07.768 } 00:24:07.769 ]' 00:24:07.769 13:50:01 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:24:07.769 13:50:01 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:24:07.769 13:50:01 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:08.027 [2024-11-06 13:50:01.824513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.027 [2024-11-06 13:50:01.824737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:08.027 [2024-11-06 13:50:01.824839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:08.027 [2024-11-06 13:50:01.824867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.027 [2024-11-06 13:50:01.824944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:08.027 [2024-11-06 13:50:01.829744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.027 [2024-11-06 13:50:01.829776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:08.027 [2024-11-06 13:50:01.829798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.774 ms 00:24:08.027 [2024-11-06 13:50:01.829810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.027 [2024-11-06 13:50:01.830863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.027 [2024-11-06 13:50:01.830884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:08.027 [2024-11-06 13:50:01.830900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:24:08.027 [2024-11-06 13:50:01.830911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.027 [2024-11-06 13:50:01.833783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.027 [2024-11-06 13:50:01.833809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:08.027 [2024-11-06 13:50:01.833824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:24:08.027 [2024-11-06 13:50:01.833834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.027 [2024-11-06 13:50:01.839640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.027 [2024-11-06 13:50:01.839803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:08.027 [2024-11-06 13:50:01.839829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:24:08.027 [2024-11-06 13:50:01.839840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.027 [2024-11-06 13:50:01.877805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.027 [2024-11-06 13:50:01.877976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:08.027 [2024-11-06 13:50:01.878009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.834 ms 00:24:08.027 [2024-11-06 13:50:01.878037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.028 [2024-11-06 13:50:01.901286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.028 [2024-11-06 13:50:01.901324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:08.028 [2024-11-06 13:50:01.901343] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.121 ms 00:24:08.028 [2024-11-06 13:50:01.901359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.028 [2024-11-06 13:50:01.901695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.028 [2024-11-06 13:50:01.901710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:08.028 [2024-11-06 13:50:01.901726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:24:08.028 [2024-11-06 13:50:01.901738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.028 [2024-11-06 13:50:01.939163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.028 [2024-11-06 13:50:01.939205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:08.028 [2024-11-06 13:50:01.939225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.358 ms 00:24:08.028 [2024-11-06 13:50:01.939236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.028 [2024-11-06 13:50:01.979242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.028 [2024-11-06 13:50:01.979298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:08.028 [2024-11-06 13:50:01.979324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.893 ms 00:24:08.028 [2024-11-06 13:50:01.979336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.287 [2024-11-06 13:50:02.019120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.287 [2024-11-06 13:50:02.019308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:08.287 [2024-11-06 13:50:02.019393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.558 ms 00:24:08.287 [2024-11-06 13:50:02.019430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.287 [2024-11-06 13:50:02.056624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.287 [2024-11-06 13:50:02.056785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:08.287 [2024-11-06 13:50:02.056868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.935 ms 00:24:08.287 [2024-11-06 13:50:02.056904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.288 [2024-11-06 13:50:02.057057] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:08.288 [2024-11-06 13:50:02.057113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:08.288 [2024-11-06 13:50:02.057237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:08.288 [2024-11-06 13:50:02.057291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:08.288 [2024-11-06 13:50:02.057345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:08.288 [2024-11-06 13:50:02.057415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:08.288 [2024-11-06 13:50:02.057438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:08.288 [2024-11-06 13:50:02.057451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:08.288 
[2024-11-06 13:50:02.057466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 8-100 (identical): 0 / 261120 wr_cnt: 0 state: free
00:24:08.289 [2024-11-06 13:50:02.058803] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:08.289 [2024-11-06 13:50:02.058820] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03c79bca-64f5-43d2-9303-3144f18687a3
00:24:08.289 [2024-11-06 13:50:02.058833] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:24:08.289 [2024-11-06 13:50:02.058847] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:24:08.289 [2024-11-06 13:50:02.058859] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:08.289 [2024-11-06 13:50:02.058878] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:08.289 [2024-11-06 13:50:02.058889] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:08.289 [2024-11-06 13:50:02.058903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:24:08.289 [2024-11-06 13:50:02.058915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:08.289 [2024-11-06 13:50:02.058928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:08.289 [2024-11-06 13:50:02.058938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:08.289 [2024-11-06 13:50:02.058953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.289 [2024-11-06 13:50:02.058964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:08.289 [2024-11-06 13:50:02.058980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.899 ms 00:24:08.289 [2024-11-06 13:50:02.058991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.289 [2024-11-06 13:50:02.081311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.289 [2024-11-06 13:50:02.081361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:08.289 [2024-11-06 13:50:02.081383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.233 ms 00:24:08.289 [2024-11-06 13:50:02.081395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.289 [2024-11-06 13:50:02.082198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.289 [2024-11-06 13:50:02.082217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:08.289 [2024-11-06 13:50:02.082234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:24:08.289 [2024-11-06 13:50:02.082246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.289 [2024-11-06 13:50:02.160066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.289 [2024-11-06 13:50:02.160277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.289 [2024-11-06 13:50:02.160309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.289 [2024-11-06 13:50:02.160322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.289 [2024-11-06 13:50:02.160530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.289 [2024-11-06 13:50:02.160544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.289 [2024-11-06 13:50:02.160559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.289 [2024-11-06 13:50:02.160571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.289 [2024-11-06 13:50:02.160695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.289 [2024-11-06 13:50:02.160710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.289 [2024-11-06 13:50:02.160734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.289 [2024-11-06 13:50:02.160745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.289 [2024-11-06 13:50:02.160794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.289 [2024-11-06 13:50:02.160805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.289 [2024-11-06 13:50:02.160820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.289 [2024-11-06 13:50:02.160831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.549 [2024-11-06 
13:50:02.309208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.549 [2024-11-06 13:50:02.309461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.549 [2024-11-06 13:50:02.309493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.549 [2024-11-06 13:50:02.309505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.549 [2024-11-06 13:50:02.420573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.549 [2024-11-06 13:50:02.420799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.549 [2024-11-06 13:50:02.420830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.549 [2024-11-06 13:50:02.420842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.549 [2024-11-06 13:50:02.421078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.549 [2024-11-06 13:50:02.421095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.549 [2024-11-06 13:50:02.421134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.549 [2024-11-06 13:50:02.421150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.549 [2024-11-06 13:50:02.421246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.549 [2024-11-06 13:50:02.421258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.549 [2024-11-06 13:50:02.421273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.549 [2024-11-06 13:50:02.421285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.549 [2024-11-06 13:50:02.421466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.549 [2024-11-06 13:50:02.421481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.549 [2024-11-06 13:50:02.421496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.549 [2024-11-06 13:50:02.421511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.549 [2024-11-06 13:50:02.421587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.549 [2024-11-06 13:50:02.421601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:08.549 [2024-11-06 13:50:02.421616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.549 [2024-11-06 13:50:02.421627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.549 [2024-11-06 13:50:02.421712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.549 [2024-11-06 13:50:02.421725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.549 [2024-11-06 13:50:02.421744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.549 [2024-11-06 13:50:02.421756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.549 [2024-11-06 13:50:02.421869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.549 [2024-11-06 13:50:02.421883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.549 [2024-11-06 13:50:02.421897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.549 [2024-11-06 13:50:02.421908] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:08.549 [2024-11-06 13:50:02.422243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 597.712 ms, result 0
00:24:08.549 true
00:24:08.549 13:50:02 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75917
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75917 ']'
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75917
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75917
00:24:08.549 killing process with pid 75917
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75917'
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75917
00:24:08.549 13:50:02 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75917
00:24:15.115 13:50:07 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:24:15.115 65536+0 records in
00:24:15.115 65536+0 records out
00:24:15.115 268435456 bytes (268 MB, 256 MiB) copied, 1.12197 s, 239 MB/s
00:24:15.115 13:50:09 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:15.374 [2024-11-06 13:50:09.100253] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization...
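The killprocess 75917 trace above is trim.sh@63 tearing down the SPDK app (pid 75917, whose command name resolves to reactor_0) via the helper in common/autotest_common.sh. Pieced together only from the trace lines visible in this log, the helper behaves roughly like the sketch below; this is a simplified reconstruction, not the actual SPDK source, and it omits the real helper's extra error and sudo handling.

# Sketch of the killprocess flow traced above (simplified reconstruction).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # guard, as in '[' -z 75917 ']'
    kill -0 "$pid" || return 1              # bail out if the pid is already gone
    if [ "$(uname)" = Linux ]; then
        # Resolve the command name; in this run it is reactor_0, the SPDK app.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Refuse to signal a bare sudo wrapper (the real helper special-cases this).
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                             # reap the child and propagate its exit code
}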
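Immediately afterwards trim.sh@66-69 starts the data-integrity pass: dd generates 65536 records of 4 KiB random data (65536 x 4096 = 268435456 bytes, exactly the 256 MiB dd reports), and spdk_dd replays that file onto the ftl0 bdev using the ftl.json config saved earlier in the test. Run by hand, the equivalent steps look like the sketch below; the of= destination is an assumption inferred from the --if path passed to spdk_dd, since the dd trace line does not show where its output is redirected.

# Generate the 256 MiB random pattern (65536 records x 4 KiB).
# The of= path is assumed from the --if argument below.
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

# Write the pattern through the FTL bdev; paths exactly as they appear in this log.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The Copying: N/256 [MB] lines further down are spdk_dd's progress output for this transfer; the average 31 MBps there is the FTL write path, not the 239 MB/s dd reported for generating the file.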
00:24:15.374 [2024-11-06 13:50:09.100625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76138 ] 00:24:15.374 [2024-11-06 13:50:09.274600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.632 [2024-11-06 13:50:09.419496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.890 [2024-11-06 13:50:09.867143] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:15.890 [2024-11-06 13:50:09.867438] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.150 [2024-11-06 13:50:10.036578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.036643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:16.150 [2024-11-06 13:50:10.036663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:16.150 [2024-11-06 13:50:10.036674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.040356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.040396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.150 [2024-11-06 13:50:10.040410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:24:16.150 [2024-11-06 13:50:10.040421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.040533] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:16.150 [2024-11-06 13:50:10.041817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:16.150 [2024-11-06 13:50:10.042421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.042444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.150 [2024-11-06 13:50:10.042458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:24:16.150 [2024-11-06 13:50:10.042470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.045269] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:16.150 [2024-11-06 13:50:10.067125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.067173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:16.150 [2024-11-06 13:50:10.067190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.857 ms 00:24:16.150 [2024-11-06 13:50:10.067201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.067320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.067337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:16.150 [2024-11-06 13:50:10.067351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:16.150 [2024-11-06 13:50:10.067362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.080523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:16.150 [2024-11-06 13:50:10.080557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.150 [2024-11-06 13:50:10.080571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.111 ms 00:24:16.150 [2024-11-06 13:50:10.080590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.080733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.080751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.150 [2024-11-06 13:50:10.080764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:16.150 [2024-11-06 13:50:10.080775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.080812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.080828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:16.150 [2024-11-06 13:50:10.080841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:16.150 [2024-11-06 13:50:10.080852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.080882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:16.150 [2024-11-06 13:50:10.086757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.086792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.150 [2024-11-06 13:50:10.086806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.885 ms 00:24:16.150 [2024-11-06 13:50:10.086818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.086874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.086888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:16.150 [2024-11-06 13:50:10.086902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:16.150 [2024-11-06 13:50:10.086913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.086936] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:16.150 [2024-11-06 13:50:10.086966] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:16.150 [2024-11-06 13:50:10.087007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:16.150 [2024-11-06 13:50:10.087043] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:16.150 [2024-11-06 13:50:10.087142] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:16.150 [2024-11-06 13:50:10.087157] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:16.150 [2024-11-06 13:50:10.087173] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:16.150 [2024-11-06 13:50:10.087187] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:16.150 [2024-11-06 13:50:10.087205] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:16.150 [2024-11-06 13:50:10.087217] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:16.150 [2024-11-06 13:50:10.087228] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:16.150 [2024-11-06 13:50:10.087238] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:16.150 [2024-11-06 13:50:10.087249] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:16.150 [2024-11-06 13:50:10.087261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.087272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:16.150 [2024-11-06 13:50:10.087283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:24:16.150 [2024-11-06 13:50:10.087293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.087376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.150 [2024-11-06 13:50:10.087393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:16.150 [2024-11-06 13:50:10.087406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:16.150 [2024-11-06 13:50:10.087418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.150 [2024-11-06 13:50:10.087514] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:16.150 [2024-11-06 13:50:10.087528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:16.150 [2024-11-06 13:50:10.087539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.150 [2024-11-06 13:50:10.087551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.150 [2024-11-06 13:50:10.087563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:16.150 [2024-11-06 13:50:10.087574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:16.150 [2024-11-06 13:50:10.087584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:16.151 [2024-11-06 13:50:10.087596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:16.151 [2024-11-06 13:50:10.087606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.151 [2024-11-06 13:50:10.087627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:16.151 [2024-11-06 13:50:10.087637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:16.151 [2024-11-06 13:50:10.087648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.151 [2024-11-06 13:50:10.087673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:16.151 [2024-11-06 13:50:10.087683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:16.151 [2024-11-06 13:50:10.087694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:16.151 [2024-11-06 13:50:10.087714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:16.151 [2024-11-06 13:50:10.087723] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:16.151 [2024-11-06 13:50:10.087744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.151 [2024-11-06 13:50:10.087764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:16.151 [2024-11-06 13:50:10.087774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.151 [2024-11-06 13:50:10.087794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:16.151 [2024-11-06 13:50:10.087803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.151 [2024-11-06 13:50:10.087824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:16.151 [2024-11-06 13:50:10.087833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.151 [2024-11-06 13:50:10.087853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:16.151 [2024-11-06 13:50:10.087862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.151 [2024-11-06 13:50:10.087880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:16.151 [2024-11-06 13:50:10.087890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:16.151 [2024-11-06 13:50:10.087899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.151 [2024-11-06 13:50:10.087909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:16.151 [2024-11-06 13:50:10.087919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:16.151 [2024-11-06 13:50:10.087928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:16.151 [2024-11-06 13:50:10.087946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:16.151 [2024-11-06 13:50:10.087956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.151 [2024-11-06 13:50:10.087966] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:16.151 [2024-11-06 13:50:10.087976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:16.151 [2024-11-06 13:50:10.087989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.151 [2024-11-06 13:50:10.088005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.151 [2024-11-06 13:50:10.088331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:16.151 [2024-11-06 13:50:10.088389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:16.151 [2024-11-06 13:50:10.088425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:16.151 
[2024-11-06 13:50:10.088457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:16.151 [2024-11-06 13:50:10.088488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:16.151 [2024-11-06 13:50:10.088518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:16.151 [2024-11-06 13:50:10.088552] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:16.151 [2024-11-06 13:50:10.088606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.151 [2024-11-06 13:50:10.088782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:16.151 [2024-11-06 13:50:10.088840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:16.151 [2024-11-06 13:50:10.088890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:16.151 [2024-11-06 13:50:10.088995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:16.151 [2024-11-06 13:50:10.089065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:16.151 [2024-11-06 13:50:10.089115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:16.151 [2024-11-06 13:50:10.089164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:16.151 [2024-11-06 13:50:10.089334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:16.151 [2024-11-06 13:50:10.089384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:16.151 [2024-11-06 13:50:10.089434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:16.151 [2024-11-06 13:50:10.089532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:16.151 [2024-11-06 13:50:10.089630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:16.151 [2024-11-06 13:50:10.089685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:16.151 [2024-11-06 13:50:10.089755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:16.151 [2024-11-06 13:50:10.089769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:16.151 [2024-11-06 13:50:10.089784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.151 [2024-11-06 13:50:10.089797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:16.151 [2024-11-06 13:50:10.089809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:16.151 [2024-11-06 13:50:10.089820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:16.151 [2024-11-06 13:50:10.089831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:16.151 [2024-11-06 13:50:10.089845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.151 [2024-11-06 13:50:10.089857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:16.151 [2024-11-06 13:50:10.089878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.387 ms 00:24:16.151 [2024-11-06 13:50:10.089890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.144261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.144325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:16.411 [2024-11-06 13:50:10.144344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.291 ms 00:24:16.411 [2024-11-06 13:50:10.144357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.144588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.144603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:16.411 [2024-11-06 13:50:10.144617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:16.411 [2024-11-06 13:50:10.144629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.215619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.215683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.411 [2024-11-06 13:50:10.215707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.958 ms 00:24:16.411 [2024-11-06 13:50:10.215720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.215873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.215888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.411 [2024-11-06 13:50:10.215902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:16.411 [2024-11-06 13:50:10.215913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.216729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.216752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.411 [2024-11-06 13:50:10.216764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:24:16.411 [2024-11-06 13:50:10.216782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.216926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.216941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.411 [2024-11-06 13:50:10.216954] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:24:16.411 [2024-11-06 13:50:10.216964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.242105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.242162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.411 [2024-11-06 13:50:10.242179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.110 ms 00:24:16.411 [2024-11-06 13:50:10.242190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.263452] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:16.411 [2024-11-06 13:50:10.263501] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:16.411 [2024-11-06 13:50:10.263519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.263532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:16.411 [2024-11-06 13:50:10.263545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.126 ms 00:24:16.411 [2024-11-06 13:50:10.263556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.295078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.295121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:16.411 [2024-11-06 13:50:10.295151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.426 ms 00:24:16.411 [2024-11-06 13:50:10.295163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.314620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.314818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:16.411 [2024-11-06 13:50:10.314840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.366 ms 00:24:16.411 [2024-11-06 13:50:10.314852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.333551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.333733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:16.411 [2024-11-06 13:50:10.333754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.616 ms 00:24:16.411 [2024-11-06 13:50:10.333766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.411 [2024-11-06 13:50:10.334631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.411 [2024-11-06 13:50:10.334654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:16.411 [2024-11-06 13:50:10.334668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:24:16.411 [2024-11-06 13:50:10.334680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.670 [2024-11-06 13:50:10.436248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.670 [2024-11-06 13:50:10.436337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:16.670 [2024-11-06 13:50:10.436358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 101.532 ms 00:24:16.670 [2024-11-06 13:50:10.436371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.670 [2024-11-06 13:50:10.449584] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:16.670 [2024-11-06 13:50:10.477915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.670 [2024-11-06 13:50:10.477993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:16.670 [2024-11-06 13:50:10.478014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.355 ms 00:24:16.670 [2024-11-06 13:50:10.478039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.670 [2024-11-06 13:50:10.478254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.670 [2024-11-06 13:50:10.478275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:16.670 [2024-11-06 13:50:10.478289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:16.670 [2024-11-06 13:50:10.478300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.670 [2024-11-06 13:50:10.478389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.670 [2024-11-06 13:50:10.478403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:16.670 [2024-11-06 13:50:10.478416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:16.670 [2024-11-06 13:50:10.478427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.670 [2024-11-06 13:50:10.478476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.670 [2024-11-06 13:50:10.478492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:16.670 [2024-11-06 13:50:10.478507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:16.670 [2024-11-06 13:50:10.478519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.670 [2024-11-06 13:50:10.478563] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:16.670 [2024-11-06 13:50:10.478577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.670 [2024-11-06 13:50:10.478589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:16.670 [2024-11-06 13:50:10.478601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:16.670 [2024-11-06 13:50:10.478613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.670 [2024-11-06 13:50:10.517747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.670 [2024-11-06 13:50:10.517999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:16.670 [2024-11-06 13:50:10.518038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.107 ms 00:24:16.670 [2024-11-06 13:50:10.518052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.670 [2024-11-06 13:50:10.518202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.670 [2024-11-06 13:50:10.518218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:16.670 [2024-11-06 13:50:10.518230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:16.670 [2024-11-06 13:50:10.518241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
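Just before the startup finish message below, it is worth noting that the geometry restored here is consistent with the earlier steps of the run: the nb=23592960 that trim.sh extracted with jq before the unload matches the "L2P entries: 23592960" in the layout dump above, and the same number accounts for the 90.00 MiB l2p region at 4 bytes per entry. A quick arithmetic check, using only values printed in this log:

# Cross-check of the FTL layout numbers dumped above (bash arithmetic).
echo $((23592960 * 4096))               # 96636764160 bytes = 90 GiB of user-addressable blocks
echo $((23592960 * 4 / 1024 / 1024))    # 90 -> matches "Region l2p ... blocks: 90.00 MiB"
echo $((0x5a00 * 4096 / 1024 / 1024))   # 90 -> blk_sz:0x5a00 in the superblock dump, apparently the same l2p region in 4 KiB blocks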
00:24:16.670 [2024-11-06 13:50:10.519582] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:16.670 [2024-11-06 13:50:10.524925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 482.644 ms, result 0 00:24:16.670 [2024-11-06 13:50:10.525925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:16.670 [2024-11-06 13:50:10.545361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:17.605  [2024-11-06T13:50:12.964Z] Copying: 33/256 [MB] (33 MBps) [2024-11-06T13:50:13.898Z] Copying: 65/256 [MB] (31 MBps) [2024-11-06T13:50:14.834Z] Copying: 97/256 [MB] (31 MBps) [2024-11-06T13:50:15.768Z] Copying: 127/256 [MB] (30 MBps) [2024-11-06T13:50:16.704Z] Copying: 159/256 [MB] (31 MBps) [2024-11-06T13:50:17.638Z] Copying: 189/256 [MB] (30 MBps) [2024-11-06T13:50:18.572Z] Copying: 221/256 [MB] (31 MBps) [2024-11-06T13:50:18.858Z] Copying: 251/256 [MB] (29 MBps) [2024-11-06T13:50:18.858Z] Copying: 256/256 [MB] (average 31 MBps)[2024-11-06 13:50:18.762744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:24.875 [2024-11-06 13:50:18.779110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.875 [2024-11-06 13:50:18.779173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:24.875 [2024-11-06 13:50:18.779195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:24.875 [2024-11-06 13:50:18.779209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.875 [2024-11-06 13:50:18.779265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:24.875 [2024-11-06 13:50:18.784128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.875 [2024-11-06 13:50:18.784165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:24.875 [2024-11-06 13:50:18.784182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.840 ms 00:24:24.875 [2024-11-06 13:50:18.784195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.875 [2024-11-06 13:50:18.786313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.875 [2024-11-06 13:50:18.786369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:24.875 [2024-11-06 13:50:18.786386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:24:24.875 [2024-11-06 13:50:18.786401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.875 [2024-11-06 13:50:18.792901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.875 [2024-11-06 13:50:18.792945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:24.875 [2024-11-06 13:50:18.792977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.475 ms 00:24:24.875 [2024-11-06 13:50:18.792991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.875 [2024-11-06 13:50:18.798850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.875 [2024-11-06 13:50:18.798893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:24.875 [2024-11-06 13:50:18.798909] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.798 ms 00:24:24.875 [2024-11-06 13:50:18.798922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.136 [2024-11-06 13:50:18.838837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.136 [2024-11-06 13:50:18.838901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:25.136 [2024-11-06 13:50:18.838921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.840 ms 00:24:25.136 [2024-11-06 13:50:18.838935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.136 [2024-11-06 13:50:18.861329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.136 [2024-11-06 13:50:18.861400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:25.136 [2024-11-06 13:50:18.861435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.314 ms 00:24:25.136 [2024-11-06 13:50:18.861455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.136 [2024-11-06 13:50:18.861625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.136 [2024-11-06 13:50:18.861643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:25.136 [2024-11-06 13:50:18.861658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:24:25.136 [2024-11-06 13:50:18.861670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.136 [2024-11-06 13:50:18.900149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.136 [2024-11-06 13:50:18.900382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:25.136 [2024-11-06 13:50:18.900410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.453 ms 00:24:25.136 [2024-11-06 13:50:18.900424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.136 [2024-11-06 13:50:18.938096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.136 [2024-11-06 13:50:18.938147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:25.136 [2024-11-06 13:50:18.938164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.566 ms 00:24:25.136 [2024-11-06 13:50:18.938179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.136 [2024-11-06 13:50:18.975837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.136 [2024-11-06 13:50:18.976078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:25.136 [2024-11-06 13:50:18.976106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.584 ms 00:24:25.136 [2024-11-06 13:50:18.976119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.136 [2024-11-06 13:50:19.012457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.136 [2024-11-06 13:50:19.012506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:25.136 [2024-11-06 13:50:19.012524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.198 ms 00:24:25.136 [2024-11-06 13:50:19.012537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.136 [2024-11-06 13:50:19.012606] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.136 [2024-11-06 13:50:19.012638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012978] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.012992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 
13:50:19.013336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.136 [2024-11-06 13:50:19.013484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:24:25.137 [2024-11-06 13:50:19.013682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.013998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.014012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:24:25.137 [2024-11-06 13:50:19.014048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:25.137 [2024-11-06 13:50:19.014062] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03c79bca-64f5-43d2-9303-3144f18687a3 00:24:25.137 [2024-11-06 13:50:19.014076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:25.137 [2024-11-06 13:50:19.014089] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:25.137 [2024-11-06 13:50:19.014101] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:25.137 [2024-11-06 13:50:19.014114] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:25.137 [2024-11-06 13:50:19.014127] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.137 [2024-11-06 13:50:19.014142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.137 [2024-11-06 13:50:19.014155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.137 [2024-11-06 13:50:19.014166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.137 [2024-11-06 13:50:19.014178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.137 [2024-11-06 13:50:19.014191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.137 [2024-11-06 13:50:19.014204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.137 [2024-11-06 13:50:19.014225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:24:25.137 [2024-11-06 13:50:19.014237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.137 [2024-11-06 13:50:19.036441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.137 [2024-11-06 13:50:19.036487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.137 [2024-11-06 13:50:19.036505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.172 ms 00:24:25.137 [2024-11-06 13:50:19.036519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.137 [2024-11-06 13:50:19.037173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.137 [2024-11-06 13:50:19.037203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.137 [2024-11-06 13:50:19.037218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:24:25.137 [2024-11-06 13:50:19.037231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.137 [2024-11-06 13:50:19.098557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.137 [2024-11-06 13:50:19.098638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.137 [2024-11-06 13:50:19.098660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.137 [2024-11-06 13:50:19.098674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.137 [2024-11-06 13:50:19.098867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.137 [2024-11-06 13:50:19.098893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.137 [2024-11-06 13:50:19.098908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.137 [2024-11-06 13:50:19.098920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
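In the statistics block above, ftl_dev_dump_stats reports 960 total writes against 0 user writes and prints 'WAF: inf'. That is consistent with the write amplification factor being the ratio of media writes to user writes, which is undefined when no user data was written; treating it that way (an inference from this dump, not a statement of the exact formula in ftl_debug.c), the guarded arithmetic looks like:

    # Assumed relationship: WAF = media writes / user writes. A zero
    # denominator is what the dump above renders as "inf" (only
    # FTL-internal metadata writes happened on this run).
    total_writes=960
    user_writes=0
    if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"
    else
      echo "WAF: $(echo "scale=2; $total_writes / $user_writes" | bc)"
    fi

The 'total valid LBAs: 0' line and the all-free band dump before it tell the same story: this shutdown persisted metadata only, with no user data resident on the device.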
00:24:25.137 [2024-11-06 13:50:19.099006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.137 [2024-11-06 13:50:19.099051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.137 [2024-11-06 13:50:19.099066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.137 [2024-11-06 13:50:19.099079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.137 [2024-11-06 13:50:19.099106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.137 [2024-11-06 13:50:19.099121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.137 [2024-11-06 13:50:19.099141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.137 [2024-11-06 13:50:19.099155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.396 [2024-11-06 13:50:19.239821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.396 [2024-11-06 13:50:19.239912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.396 [2024-11-06 13:50:19.239934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.396 [2024-11-06 13:50:19.239948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.396 [2024-11-06 13:50:19.354579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.396 [2024-11-06 13:50:19.354682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.396 [2024-11-06 13:50:19.354705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.396 [2024-11-06 13:50:19.354719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.396 [2024-11-06 13:50:19.354890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.396 [2024-11-06 13:50:19.354908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.396 [2024-11-06 13:50:19.354923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.396 [2024-11-06 13:50:19.354937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.396 [2024-11-06 13:50:19.354976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.396 [2024-11-06 13:50:19.354991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.396 [2024-11-06 13:50:19.355004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.396 [2024-11-06 13:50:19.355046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.396 [2024-11-06 13:50:19.355216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.396 [2024-11-06 13:50:19.355235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.396 [2024-11-06 13:50:19.355250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.396 [2024-11-06 13:50:19.355264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.396 [2024-11-06 13:50:19.355318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.396 [2024-11-06 13:50:19.355335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:25.396 [2024-11-06 13:50:19.355348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.396 [2024-11-06 
13:50:19.355361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.396 [2024-11-06 13:50:19.355425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.396 [2024-11-06 13:50:19.355439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.396 [2024-11-06 13:50:19.355454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.396 [2024-11-06 13:50:19.355467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.396 [2024-11-06 13:50:19.355530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.396 [2024-11-06 13:50:19.355545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.396 [2024-11-06 13:50:19.355557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.397 [2024-11-06 13:50:19.355575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.397 [2024-11-06 13:50:19.355773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 576.657 ms, result 0 00:24:26.773 00:24:26.773 00:24:26.773 13:50:20 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76254 00:24:26.773 13:50:20 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:26.773 13:50:20 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76254 00:24:26.773 13:50:20 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76254 ']' 00:24:26.773 13:50:20 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.773 13:50:20 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:26.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.773 13:50:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.773 13:50:20 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:26.773 13:50:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:27.031 [2024-11-06 13:50:20.863508] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
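The xtrace above shows trim.sh starting a dedicated spdk_tgt with the ftl_init log flag enabled and then sitting in waitforlisten until the RPC socket accepts commands, which is what the 'Waiting for process to start up...' message and max_retries=100 correspond to. A condensed sketch of that launch-and-wait pattern, with waitforlisten reduced to an illustrative poll loop (rpc_get_methods is a standard SPDK RPC; the sleep interval and retry bound are stand-ins for the helper's real logic):

    # Start the target, keep its pid for the killprocess call at the end
    # of the test, and poll the UNIX-domain RPC socket until it answers.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" -L ftl_init &
    svcpid=$!
    for _ in $(seq 1 100); do
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
    done

Once the socket answers (the EAL bring-up below, 'Total cores available: 1' and the reactor on core 0, has completed by then), the script replays the saved bdev configuration with rpc.py load_config, which kicks off the second 'FTL startup' sequence that follows.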
00:24:27.032 [2024-11-06 13:50:20.863865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76254 ] 00:24:27.290 [2024-11-06 13:50:21.035423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.290 [2024-11-06 13:50:21.201809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.669 13:50:22 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:28.669 13:50:22 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:24:28.669 13:50:22 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:28.669 [2024-11-06 13:50:22.529895] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.669 [2024-11-06 13:50:22.530002] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.929 [2024-11-06 13:50:22.700547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.929 [2024-11-06 13:50:22.700628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:28.929 [2024-11-06 13:50:22.700652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:28.929 [2024-11-06 13:50:22.700666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.929 [2024-11-06 13:50:22.704604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.929 [2024-11-06 13:50:22.704654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.929 [2024-11-06 13:50:22.704673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.909 ms 00:24:28.929 [2024-11-06 13:50:22.704686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.929 [2024-11-06 13:50:22.704807] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:28.929 [2024-11-06 13:50:22.705890] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:28.929 [2024-11-06 13:50:22.705924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.929 [2024-11-06 13:50:22.705949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.929 [2024-11-06 13:50:22.705966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:24:28.929 [2024-11-06 13:50:22.705978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.929 [2024-11-06 13:50:22.708661] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:28.929 [2024-11-06 13:50:22.728692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.929 [2024-11-06 13:50:22.728772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:28.929 [2024-11-06 13:50:22.728792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.038 ms 00:24:28.929 [2024-11-06 13:50:22.728808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.929 [2024-11-06 13:50:22.728950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.929 [2024-11-06 13:50:22.728974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:28.929 [2024-11-06 13:50:22.728989] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:28.929 [2024-11-06 13:50:22.729005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.929 [2024-11-06 13:50:22.742047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.929 [2024-11-06 13:50:22.742109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.929 [2024-11-06 13:50:22.742126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.941 ms 00:24:28.929 [2024-11-06 13:50:22.742147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.929 [2024-11-06 13:50:22.742356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.929 [2024-11-06 13:50:22.742380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.929 [2024-11-06 13:50:22.742394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:24:28.929 [2024-11-06 13:50:22.742410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.930 [2024-11-06 13:50:22.742455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.930 [2024-11-06 13:50:22.742474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:28.930 [2024-11-06 13:50:22.742488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:28.930 [2024-11-06 13:50:22.742504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.930 [2024-11-06 13:50:22.742542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:28.930 [2024-11-06 13:50:22.748581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.930 [2024-11-06 13:50:22.748856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.930 [2024-11-06 13:50:22.748887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.047 ms 00:24:28.930 [2024-11-06 13:50:22.748901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.930 [2024-11-06 13:50:22.748977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.930 [2024-11-06 13:50:22.748992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:28.930 [2024-11-06 13:50:22.749010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:28.930 [2024-11-06 13:50:22.749045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.930 [2024-11-06 13:50:22.749081] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:28.930 [2024-11-06 13:50:22.749127] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:28.930 [2024-11-06 13:50:22.749210] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:28.930 [2024-11-06 13:50:22.749236] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:28.930 [2024-11-06 13:50:22.749344] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:28.930 [2024-11-06 13:50:22.749361] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:28.930 [2024-11-06 13:50:22.749389] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:28.930 [2024-11-06 13:50:22.749406] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:28.930 [2024-11-06 13:50:22.749425] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:28.930 [2024-11-06 13:50:22.749440] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:28.930 [2024-11-06 13:50:22.749457] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:28.930 [2024-11-06 13:50:22.749472] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:28.930 [2024-11-06 13:50:22.749492] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:28.930 [2024-11-06 13:50:22.749507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.930 [2024-11-06 13:50:22.749525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:28.930 [2024-11-06 13:50:22.749539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:24:28.930 [2024-11-06 13:50:22.749555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.930 [2024-11-06 13:50:22.749644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.930 [2024-11-06 13:50:22.749662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:28.930 [2024-11-06 13:50:22.749675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:28.930 [2024-11-06 13:50:22.749691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.930 [2024-11-06 13:50:22.749819] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:28.930 [2024-11-06 13:50:22.749843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:28.930 [2024-11-06 13:50:22.749857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.930 [2024-11-06 13:50:22.749900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.930 [2024-11-06 13:50:22.749914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:28.930 [2024-11-06 13:50:22.749930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:28.930 [2024-11-06 13:50:22.749943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:28.930 [2024-11-06 13:50:22.749963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:28.930 [2024-11-06 13:50:22.749976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:28.930 [2024-11-06 13:50:22.749992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.930 [2024-11-06 13:50:22.750004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:28.930 [2024-11-06 13:50:22.750033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:28.930 [2024-11-06 13:50:22.750046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.930 [2024-11-06 13:50:22.750062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:28.930 [2024-11-06 13:50:22.750074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:28.930 [2024-11-06 13:50:22.750090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.930 
[2024-11-06 13:50:22.750102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:28.930 [2024-11-06 13:50:22.750117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:28.930 [2024-11-06 13:50:22.750130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:28.930 [2024-11-06 13:50:22.750169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.930 [2024-11-06 13:50:22.750198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:28.930 [2024-11-06 13:50:22.750218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.930 [2024-11-06 13:50:22.750245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:28.930 [2024-11-06 13:50:22.750257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.930 [2024-11-06 13:50:22.750296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:28.930 [2024-11-06 13:50:22.750311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.930 [2024-11-06 13:50:22.750347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:28.930 [2024-11-06 13:50:22.750359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.930 [2024-11-06 13:50:22.750386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:28.930 [2024-11-06 13:50:22.750401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:28.930 [2024-11-06 13:50:22.750413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.930 [2024-11-06 13:50:22.750428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:28.930 [2024-11-06 13:50:22.750440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:28.930 [2024-11-06 13:50:22.750459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:28.930 [2024-11-06 13:50:22.750485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:28.930 [2024-11-06 13:50:22.750497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750513] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:28.930 [2024-11-06 13:50:22.750530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:28.930 [2024-11-06 13:50:22.750546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.930 [2024-11-06 13:50:22.750558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.930 [2024-11-06 13:50:22.750573] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:28.930 [2024-11-06 13:50:22.750585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:28.930 [2024-11-06 13:50:22.750600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:28.930 [2024-11-06 13:50:22.750612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:28.930 [2024-11-06 13:50:22.750627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:28.930 [2024-11-06 13:50:22.750639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:28.930 [2024-11-06 13:50:22.750657] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:28.930 [2024-11-06 13:50:22.750673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.930 [2024-11-06 13:50:22.750696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:28.930 [2024-11-06 13:50:22.750709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:28.930 [2024-11-06 13:50:22.750726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:28.930 [2024-11-06 13:50:22.750740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:28.930 [2024-11-06 13:50:22.750756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:28.930 [2024-11-06 13:50:22.750768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:28.930 [2024-11-06 13:50:22.750784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:28.930 [2024-11-06 13:50:22.750797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:28.930 [2024-11-06 13:50:22.750813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:28.930 [2024-11-06 13:50:22.750825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:28.930 [2024-11-06 13:50:22.750841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:28.930 [2024-11-06 13:50:22.750853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:28.931 [2024-11-06 13:50:22.750868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:28.931 [2024-11-06 13:50:22.750881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:28.931 [2024-11-06 13:50:22.750897] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:28.931 [2024-11-06 
13:50:22.750912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.931 [2024-11-06 13:50:22.750932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:28.931 [2024-11-06 13:50:22.750944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:28.931 [2024-11-06 13:50:22.750960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:28.931 [2024-11-06 13:50:22.750973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:28.931 [2024-11-06 13:50:22.750990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.931 [2024-11-06 13:50:22.751004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:28.931 [2024-11-06 13:50:22.751031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:24:28.931 [2024-11-06 13:50:22.751044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.931 [2024-11-06 13:50:22.805661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.931 [2024-11-06 13:50:22.805717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.931 [2024-11-06 13:50:22.805740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.530 ms 00:24:28.931 [2024-11-06 13:50:22.805758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.931 [2024-11-06 13:50:22.805967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.931 [2024-11-06 13:50:22.805983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:28.931 [2024-11-06 13:50:22.806001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:28.931 [2024-11-06 13:50:22.806014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.931 [2024-11-06 13:50:22.864219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.931 [2024-11-06 13:50:22.864487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.931 [2024-11-06 13:50:22.864524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.146 ms 00:24:28.931 [2024-11-06 13:50:22.864539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.931 [2024-11-06 13:50:22.864678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.931 [2024-11-06 13:50:22.864693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.931 [2024-11-06 13:50:22.864712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:28.931 [2024-11-06 13:50:22.864725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.931 [2024-11-06 13:50:22.865585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.931 [2024-11-06 13:50:22.865611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.931 [2024-11-06 13:50:22.865634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:24:28.931 [2024-11-06 13:50:22.865648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:28.931 [2024-11-06 13:50:22.865801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.931 [2024-11-06 13:50:22.865819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.931 [2024-11-06 13:50:22.865836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:24:28.931 [2024-11-06 13:50:22.865850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.931 [2024-11-06 13:50:22.894537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.931 [2024-11-06 13:50:22.894591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.931 [2024-11-06 13:50:22.894614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.648 ms 00:24:28.931 [2024-11-06 13:50:22.894628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.191 [2024-11-06 13:50:22.932127] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:29.191 [2024-11-06 13:50:22.932207] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:29.191 [2024-11-06 13:50:22.932237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.191 [2024-11-06 13:50:22.932251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:29.191 [2024-11-06 13:50:22.932275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.412 ms 00:24:29.191 [2024-11-06 13:50:22.932288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.191 [2024-11-06 13:50:22.963608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.191 [2024-11-06 13:50:22.963662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:29.191 [2024-11-06 13:50:22.963690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.190 ms 00:24:29.191 [2024-11-06 13:50:22.963704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.191 [2024-11-06 13:50:22.982600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.191 [2024-11-06 13:50:22.982646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:29.191 [2024-11-06 13:50:22.982671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.778 ms 00:24:29.191 [2024-11-06 13:50:22.982684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.191 [2024-11-06 13:50:23.001682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.191 [2024-11-06 13:50:23.001936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:29.191 [2024-11-06 13:50:23.001968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.899 ms 00:24:29.191 [2024-11-06 13:50:23.001981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.191 [2024-11-06 13:50:23.002896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.191 [2024-11-06 13:50:23.002928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:29.191 [2024-11-06 13:50:23.002947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:24:29.191 [2024-11-06 13:50:23.002960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.191 [2024-11-06 
13:50:23.104475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.191 [2024-11-06 13:50:23.104581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:29.191 [2024-11-06 13:50:23.104609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.470 ms 00:24:29.191 [2024-11-06 13:50:23.104624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.191 [2024-11-06 13:50:23.117324] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:29.191 [2024-11-06 13:50:23.145843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.191 [2024-11-06 13:50:23.145941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:29.191 [2024-11-06 13:50:23.145967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.016 ms 00:24:29.191 [2024-11-06 13:50:23.145985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.191 [2024-11-06 13:50:23.146231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.191 [2024-11-06 13:50:23.146253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:29.191 [2024-11-06 13:50:23.146268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:29.191 [2024-11-06 13:50:23.146286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.192 [2024-11-06 13:50:23.146400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.192 [2024-11-06 13:50:23.146426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:29.192 [2024-11-06 13:50:23.146440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:24:29.192 [2024-11-06 13:50:23.146463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.192 [2024-11-06 13:50:23.146498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.192 [2024-11-06 13:50:23.146515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:29.192 [2024-11-06 13:50:23.146528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:29.192 [2024-11-06 13:50:23.146549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.192 [2024-11-06 13:50:23.146597] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:29.192 [2024-11-06 13:50:23.146620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.192 [2024-11-06 13:50:23.146633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:29.192 [2024-11-06 13:50:23.146655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:29.192 [2024-11-06 13:50:23.146667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.452 [2024-11-06 13:50:23.186142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.452 [2024-11-06 13:50:23.186202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:29.452 [2024-11-06 13:50:23.186228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.430 ms 00:24:29.452 [2024-11-06 13:50:23.186242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.452 [2024-11-06 13:50:23.186406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.452 [2024-11-06 13:50:23.186423] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:29.452 [2024-11-06 13:50:23.186441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:29.452 [2024-11-06 13:50:23.186460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.452 [2024-11-06 13:50:23.187889] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:29.452 [2024-11-06 13:50:23.192766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 486.980 ms, result 0 00:24:29.452 [2024-11-06 13:50:23.194309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:29.452 Some configs were skipped because the RPC state that can call them passed over. 00:24:29.452 13:50:23 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:29.711 [2024-11-06 13:50:23.434950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.711 [2024-11-06 13:50:23.435041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:29.711 [2024-11-06 13:50:23.435062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:24:29.711 [2024-11-06 13:50:23.435080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.711 [2024-11-06 13:50:23.435124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.908 ms, result 0 00:24:29.711 true 00:24:29.711 13:50:23 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:29.711 [2024-11-06 13:50:23.686875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.711 [2024-11-06 13:50:23.686948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:29.711 [2024-11-06 13:50:23.686972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:24:29.711 [2024-11-06 13:50:23.686986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.711 [2024-11-06 13:50:23.687056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.506 ms, result 0 00:24:29.711 true 00:24:29.970 13:50:23 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76254 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76254 ']' 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76254 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76254 00:24:29.970 killing process with pid 76254 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76254' 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76254 00:24:29.970 13:50:23 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76254 00:24:31.350 [2024-11-06 13:50:25.040629] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.350 [2024-11-06 13:50:25.040728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:31.350 [2024-11-06 13:50:25.040750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:31.350 [2024-11-06 13:50:25.040768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.350 [2024-11-06 13:50:25.040807] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:31.350 [2024-11-06 13:50:25.046289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.350 [2024-11-06 13:50:25.046332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:31.350 [2024-11-06 13:50:25.046363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.450 ms 00:24:31.350 [2024-11-06 13:50:25.046378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.350 [2024-11-06 13:50:25.046718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.350 [2024-11-06 13:50:25.046744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:31.350 [2024-11-06 13:50:25.046763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:24:31.350 [2024-11-06 13:50:25.046777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.050620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.050664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:31.351 [2024-11-06 13:50:25.050689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.808 ms 00:24:31.351 [2024-11-06 13:50:25.050704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.056915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.056959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:31.351 [2024-11-06 13:50:25.056982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:24:31.351 [2024-11-06 13:50:25.056997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.074284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.074330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:31.351 [2024-11-06 13:50:25.074364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.178 ms 00:24:31.351 [2024-11-06 13:50:25.074392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.086761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.086812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:31.351 [2024-11-06 13:50:25.086836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.294 ms 00:24:31.351 [2024-11-06 13:50:25.086851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.087015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.087055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:31.351 [2024-11-06 13:50:25.087075] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:24:31.351 [2024-11-06 13:50:25.087090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.104492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.104538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:31.351 [2024-11-06 13:50:25.104561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.366 ms 00:24:31.351 [2024-11-06 13:50:25.104584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.121425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.121471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:31.351 [2024-11-06 13:50:25.121498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.748 ms 00:24:31.351 [2024-11-06 13:50:25.121512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.137632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.137679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:31.351 [2024-11-06 13:50:25.137711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.042 ms 00:24:31.351 [2024-11-06 13:50:25.137724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.154242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.351 [2024-11-06 13:50:25.154284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:31.351 [2024-11-06 13:50:25.154304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.360 ms 00:24:31.351 [2024-11-06 13:50:25.154316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.351 [2024-11-06 13:50:25.154393] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:31.351 [2024-11-06 13:50:25.154416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 13:50:25.154568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:31.351 [2024-11-06 
13:50:25.154582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11 through Band 84: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries, timestamps 13:50:25.154582 to 13:50:25.155897) 00:24:31.352 [2024-11-06 13:50:25.155897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.155919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.155934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.155953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.155968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.155986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:31.352 [2024-11-06 13:50:25.156202] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:31.352 [2024-11-06 13:50:25.156230] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03c79bca-64f5-43d2-9303-3144f18687a3 00:24:31.352 [2024-11-06 13:50:25.156263] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:31.352 [2024-11-06 13:50:25.156289] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:31.352 [2024-11-06 13:50:25.156304] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:31.352 [2024-11-06 13:50:25.156323] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:31.352 [2024-11-06 13:50:25.156338] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:31.352 [2024-11-06 13:50:25.156357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:31.352 [2024-11-06 13:50:25.156371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:31.352 [2024-11-06 13:50:25.156388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:31.352 [2024-11-06 13:50:25.156402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:31.352 [2024-11-06 13:50:25.156421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
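The statistics dump above ends with WAF: inf, and the same dump explains why. Assuming the usual definition of write amplification factor for an FTL, media writes divided by host writes, the value follows directly from the counters printed a few records earlier: total writes is 960 while user writes is 0, so WAF = 960 / 0, which the debug printer renders as inf. Any nonzero host write count would make it finite; for example, 960 media writes over 480 user writes would give WAF = 2.0. Since this shutdown happens right after two small trims and no data writes, a zero user-write count is the expected result here.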
00:24:31.352 [2024-11-06 13:50:25.156447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:31.352 [2024-11-06 13:50:25.156467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:24:31.352 [2024-11-06 13:50:25.156481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.352 [2024-11-06 13:50:25.180300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.352 [2024-11-06 13:50:25.180344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:31.352 [2024-11-06 13:50:25.180371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.775 ms 00:24:31.352 [2024-11-06 13:50:25.180388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.352 [2024-11-06 13:50:25.181112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.352 [2024-11-06 13:50:25.181138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:31.352 [2024-11-06 13:50:25.181158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:24:31.352 [2024-11-06 13:50:25.181176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.352 [2024-11-06 13:50:25.263811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.352 [2024-11-06 13:50:25.263869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.352 [2024-11-06 13:50:25.263891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.352 [2024-11-06 13:50:25.263906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.352 [2024-11-06 13:50:25.264109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.352 [2024-11-06 13:50:25.264128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.352 [2024-11-06 13:50:25.264146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.352 [2024-11-06 13:50:25.264165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.352 [2024-11-06 13:50:25.264252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.352 [2024-11-06 13:50:25.264270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.352 [2024-11-06 13:50:25.264292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.352 [2024-11-06 13:50:25.264306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.352 [2024-11-06 13:50:25.264338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.352 [2024-11-06 13:50:25.264354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:31.352 [2024-11-06 13:50:25.264372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.352 [2024-11-06 13:50:25.264386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 13:50:25.415094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.612 [2024-11-06 13:50:25.415182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.612 [2024-11-06 13:50:25.415209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.612 [2024-11-06 13:50:25.415225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 
13:50:25.533708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.612 [2024-11-06 13:50:25.533783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:31.612 [2024-11-06 13:50:25.533808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.612 [2024-11-06 13:50:25.533827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 13:50:25.533995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.612 [2024-11-06 13:50:25.534011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:31.612 [2024-11-06 13:50:25.534049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.612 [2024-11-06 13:50:25.534063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 13:50:25.534105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.612 [2024-11-06 13:50:25.534136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:31.612 [2024-11-06 13:50:25.534155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.612 [2024-11-06 13:50:25.534169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 13:50:25.534383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.612 [2024-11-06 13:50:25.534400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:31.612 [2024-11-06 13:50:25.534420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.612 [2024-11-06 13:50:25.534434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 13:50:25.534504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.612 [2024-11-06 13:50:25.534520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:31.612 [2024-11-06 13:50:25.534538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.612 [2024-11-06 13:50:25.534552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 13:50:25.534618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.612 [2024-11-06 13:50:25.534633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:31.612 [2024-11-06 13:50:25.534656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.612 [2024-11-06 13:50:25.534670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 13:50:25.534738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.612 [2024-11-06 13:50:25.534761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.612 [2024-11-06 13:50:25.534780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.612 [2024-11-06 13:50:25.534794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.612 [2024-11-06 13:50:25.534998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 494.333 ms, result 0 00:24:32.990 13:50:26 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:32.990 13:50:26 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.990 [2024-11-06 13:50:26.905487] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:24:32.990 [2024-11-06 13:50:26.905656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76333 ] 00:24:33.249 [2024-11-06 13:50:27.084233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.508 [2024-11-06 13:50:27.239871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.768 [2024-11-06 13:50:27.720606] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.768 [2024-11-06 13:50:27.720707] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:34.028 [2024-11-06 13:50:27.893931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.894043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:34.028 [2024-11-06 13:50:27.894066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:34.028 [2024-11-06 13:50:27.894080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.898171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.898217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:34.028 [2024-11-06 13:50:27.898235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:24:34.028 [2024-11-06 13:50:27.898249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.898397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:34.028 [2024-11-06 13:50:27.899583] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:34.028 [2024-11-06 13:50:27.899620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.899635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:34.028 [2024-11-06 13:50:27.899650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:24:34.028 [2024-11-06 13:50:27.899664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.902575] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:34.028 [2024-11-06 13:50:27.925807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.925862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:34.028 [2024-11-06 13:50:27.925881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.232 ms 00:24:34.028 [2024-11-06 13:50:27.925894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.926033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.926053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:34.028 [2024-11-06 13:50:27.926067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:24:34.028 [2024-11-06 13:50:27.926081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.939673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.939713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:34.028 [2024-11-06 13:50:27.939731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.534 ms 00:24:34.028 [2024-11-06 13:50:27.939745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.939921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.939940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:34.028 [2024-11-06 13:50:27.939955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:34.028 [2024-11-06 13:50:27.939968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.940009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.940042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:34.028 [2024-11-06 13:50:27.940057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:34.028 [2024-11-06 13:50:27.940069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.940109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:34.028 [2024-11-06 13:50:27.946791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.946832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:34.028 [2024-11-06 13:50:27.946850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:24:34.028 [2024-11-06 13:50:27.946863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.946933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.028 [2024-11-06 13:50:27.946950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:34.028 [2024-11-06 13:50:27.946965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:34.028 [2024-11-06 13:50:27.946978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.028 [2024-11-06 13:50:27.947014] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:34.028 [2024-11-06 13:50:27.947069] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:34.028 [2024-11-06 13:50:27.947115] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:34.028 [2024-11-06 13:50:27.947140] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:34.028 [2024-11-06 13:50:27.947248] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:34.029 [2024-11-06 13:50:27.947267] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:34.029 [2024-11-06 13:50:27.947285] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:34.029 [2024-11-06 13:50:27.947301] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:34.029 [2024-11-06 13:50:27.947324] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:34.029 [2024-11-06 13:50:27.947339] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:34.029 [2024-11-06 13:50:27.947353] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:34.029 [2024-11-06 13:50:27.947367] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:34.029 [2024-11-06 13:50:27.947380] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:34.029 [2024-11-06 13:50:27.947394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.029 [2024-11-06 13:50:27.947408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:34.029 [2024-11-06 13:50:27.947434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:24:34.029 [2024-11-06 13:50:27.947447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.029 [2024-11-06 13:50:27.947532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.029 [2024-11-06 13:50:27.947552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:34.029 [2024-11-06 13:50:27.947565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:34.029 [2024-11-06 13:50:27.947577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.029 [2024-11-06 13:50:27.947679] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:34.029 [2024-11-06 13:50:27.947694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:34.029 [2024-11-06 13:50:27.947708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.029 [2024-11-06 13:50:27.947721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.029 [2024-11-06 13:50:27.947735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:34.029 [2024-11-06 13:50:27.947747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:34.029 [2024-11-06 13:50:27.947760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:34.029 [2024-11-06 13:50:27.947772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:34.029 [2024-11-06 13:50:27.947785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:34.029 [2024-11-06 13:50:27.947797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.029 [2024-11-06 13:50:27.947809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:34.029 [2024-11-06 13:50:27.947821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:34.029 [2024-11-06 13:50:27.947832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.029 [2024-11-06 13:50:27.947860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:34.029 [2024-11-06 13:50:27.947872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:34.029 [2024-11-06 13:50:27.947884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.029 [2024-11-06 13:50:27.947896] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:34.029 [2024-11-06 13:50:27.947908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:34.029 [2024-11-06 13:50:27.947919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.029 [2024-11-06 13:50:27.947931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:34.029 [2024-11-06 13:50:27.947943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:34.029 [2024-11-06 13:50:27.947954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.029 [2024-11-06 13:50:27.947966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:34.029 [2024-11-06 13:50:27.947977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:34.029 [2024-11-06 13:50:27.947990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.029 [2024-11-06 13:50:27.948018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:34.029 [2024-11-06 13:50:27.948031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:34.029 [2024-11-06 13:50:27.948058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.029 [2024-11-06 13:50:27.948072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:34.029 [2024-11-06 13:50:27.948085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:34.029 [2024-11-06 13:50:27.948098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.029 [2024-11-06 13:50:27.948112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:34.029 [2024-11-06 13:50:27.948125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:34.029 [2024-11-06 13:50:27.948138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.029 [2024-11-06 13:50:27.948150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:34.029 [2024-11-06 13:50:27.948163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:34.029 [2024-11-06 13:50:27.948176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.029 [2024-11-06 13:50:27.948189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:34.029 [2024-11-06 13:50:27.948204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:34.029 [2024-11-06 13:50:27.948218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.029 [2024-11-06 13:50:27.948230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:34.029 [2024-11-06 13:50:27.948243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:34.029 [2024-11-06 13:50:27.948255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.029 [2024-11-06 13:50:27.948268] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:34.029 [2024-11-06 13:50:27.948281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:34.029 [2024-11-06 13:50:27.948295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.029 [2024-11-06 13:50:27.948314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.029 [2024-11-06 13:50:27.948328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:34.029 
[2024-11-06 13:50:27.948340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:34.029 [2024-11-06 13:50:27.948353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:34.029 [2024-11-06 13:50:27.948366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:34.029 [2024-11-06 13:50:27.948378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:34.029 [2024-11-06 13:50:27.948391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:34.029 [2024-11-06 13:50:27.948405] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:34.029 [2024-11-06 13:50:27.948422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.029 [2024-11-06 13:50:27.948437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:34.029 [2024-11-06 13:50:27.948451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:34.029 [2024-11-06 13:50:27.948467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:34.029 [2024-11-06 13:50:27.948480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:34.029 [2024-11-06 13:50:27.948495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:34.029 [2024-11-06 13:50:27.948508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:34.029 [2024-11-06 13:50:27.948522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:34.029 [2024-11-06 13:50:27.948536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:34.029 [2024-11-06 13:50:27.948550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:34.029 [2024-11-06 13:50:27.948564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:34.029 [2024-11-06 13:50:27.948577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:34.029 [2024-11-06 13:50:27.948591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:34.029 [2024-11-06 13:50:27.948604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:34.029 [2024-11-06 13:50:27.948618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:34.029 [2024-11-06 13:50:27.948631] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:34.029 [2024-11-06 13:50:27.948647] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.029 [2024-11-06 13:50:27.948668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:34.029 [2024-11-06 13:50:27.948682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:34.029 [2024-11-06 13:50:27.948697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:34.030 [2024-11-06 13:50:27.948711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:34.030 [2024-11-06 13:50:27.948726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.030 [2024-11-06 13:50:27.948741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:34.030 [2024-11-06 13:50:27.948761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:24:34.030 [2024-11-06 13:50:27.948774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.030 [2024-11-06 13:50:28.005377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.030 [2024-11-06 13:50:28.005457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:34.030 [2024-11-06 13:50:28.005480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.524 ms 00:24:34.030 [2024-11-06 13:50:28.005496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.030 [2024-11-06 13:50:28.005754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.030 [2024-11-06 13:50:28.005782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:34.030 [2024-11-06 13:50:28.005799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:34.030 [2024-11-06 13:50:28.005814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.083402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.083467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:34.289 [2024-11-06 13:50:28.083495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.547 ms 00:24:34.289 [2024-11-06 13:50:28.083510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.083675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.083692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:34.289 [2024-11-06 13:50:28.083708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:34.289 [2024-11-06 13:50:28.083721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.084537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.084564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:34.289 [2024-11-06 13:50:28.084580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:24:34.289 [2024-11-06 13:50:28.084603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 
13:50:28.084759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.084785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:34.289 [2024-11-06 13:50:28.084800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:24:34.289 [2024-11-06 13:50:28.084814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.111276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.111337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:34.289 [2024-11-06 13:50:28.111357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.426 ms 00:24:34.289 [2024-11-06 13:50:28.111374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.133732] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:34.289 [2024-11-06 13:50:28.133801] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:34.289 [2024-11-06 13:50:28.133823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.133838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:34.289 [2024-11-06 13:50:28.133854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.212 ms 00:24:34.289 [2024-11-06 13:50:28.133867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.167202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.167270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:34.289 [2024-11-06 13:50:28.167289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.221 ms 00:24:34.289 [2024-11-06 13:50:28.167303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.188075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.188119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:34.289 [2024-11-06 13:50:28.188136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.680 ms 00:24:34.289 [2024-11-06 13:50:28.188149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.208271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.208315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:34.289 [2024-11-06 13:50:28.208333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.025 ms 00:24:34.289 [2024-11-06 13:50:28.208346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.289 [2024-11-06 13:50:28.209275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.289 [2024-11-06 13:50:28.209308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:34.289 [2024-11-06 13:50:28.209323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:24:34.289 [2024-11-06 13:50:28.209337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.316986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:34.549 [2024-11-06 13:50:28.317089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:34.549 [2024-11-06 13:50:28.317114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.607 ms 00:24:34.549 [2024-11-06 13:50:28.317130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.329726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:34.549 [2024-11-06 13:50:28.358157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.549 [2024-11-06 13:50:28.358247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:34.549 [2024-11-06 13:50:28.358271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.834 ms 00:24:34.549 [2024-11-06 13:50:28.358298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.358526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.549 [2024-11-06 13:50:28.358546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:34.549 [2024-11-06 13:50:28.358562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:34.549 [2024-11-06 13:50:28.358575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.358657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.549 [2024-11-06 13:50:28.358675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:34.549 [2024-11-06 13:50:28.358691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:34.549 [2024-11-06 13:50:28.358704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.358767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.549 [2024-11-06 13:50:28.358786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:34.549 [2024-11-06 13:50:28.358800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:34.549 [2024-11-06 13:50:28.358814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.358867] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:34.549 [2024-11-06 13:50:28.358884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.549 [2024-11-06 13:50:28.358898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:34.549 [2024-11-06 13:50:28.358912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:34.549 [2024-11-06 13:50:28.358926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.400827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.549 [2024-11-06 13:50:28.400878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:34.549 [2024-11-06 13:50:28.400896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.870 ms 00:24:34.549 [2024-11-06 13:50:28.400917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.401080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.549 [2024-11-06 13:50:28.401098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:24:34.549 [2024-11-06 13:50:28.401113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:34.549 [2024-11-06 13:50:28.401126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.549 [2024-11-06 13:50:28.402544] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:34.549 [2024-11-06 13:50:28.407491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 508.243 ms, result 0 00:24:34.549 [2024-11-06 13:50:28.408529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:34.549 [2024-11-06 13:50:28.428975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:35.485  [2024-11-06T13:50:30.864Z] Copying: 30/256 [MB] (30 MBps) [2024-11-06T13:50:31.450Z] Copying: 57/256 [MB] (27 MBps) [2024-11-06T13:50:32.827Z] Copying: 86/256 [MB] (28 MBps) [2024-11-06T13:50:33.765Z] Copying: 114/256 [MB] (28 MBps) [2024-11-06T13:50:34.702Z] Copying: 142/256 [MB] (27 MBps) [2024-11-06T13:50:35.638Z] Copying: 169/256 [MB] (27 MBps) [2024-11-06T13:50:36.575Z] Copying: 195/256 [MB] (25 MBps) [2024-11-06T13:50:37.510Z] Copying: 225/256 [MB] (30 MBps) [2024-11-06T13:50:37.510Z] Copying: 254/256 [MB] (29 MBps) [2024-11-06T13:50:37.510Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-06 13:50:37.476477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:43.527 [2024-11-06 13:50:37.492507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-11-06 13:50:37.492561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:43.527 [2024-11-06 13:50:37.492581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:43.527 [2024-11-06 13:50:37.492607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-11-06 13:50:37.492636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:43.527 [2024-11-06 13:50:37.497555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-11-06 13:50:37.497590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:43.527 [2024-11-06 13:50:37.497606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.900 ms 00:24:43.527 [2024-11-06 13:50:37.497618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-11-06 13:50:37.497889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-11-06 13:50:37.497915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:43.527 [2024-11-06 13:50:37.497927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:24:43.527 [2024-11-06 13:50:37.497939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.527 [2024-11-06 13:50:37.500932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-11-06 13:50:37.500965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:43.527 [2024-11-06 13:50:37.500978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms 00:24:43.527 [2024-11-06 13:50:37.500988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
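The copy progress ticks above cover the full 256 MB transfer between roughly 13:50:28.4 and 13:50:37.5, about nine seconds, which is consistent with the reported average (256 MB / 9 s is approximately 28 MBps). For readability, the commands the harness ran in this stretch, reconstructed verbatim from the xtrace lines earlier in the log (trim.sh@78, @79, @84 and @85), amount to:

    # Trim two 1024-block ranges on the FTL bdev over JSON-RPC
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
    # Read 65536 blocks from ftl0 back into a file for later verification
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Each bdev_ftl_unmap call shows up in the log as its own short 'FTL trim' management process (1.908 ms and 1.506 ms above), while the spdk_dd read runs in a separate process (spdk_pid76333) and therefore triggers a complete 'FTL startup' before the copy and the 'FTL shutdown' whose persist steps follow below.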
00:24:43.527 [2024-11-06 13:50:37.506832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.527 [2024-11-06 13:50:37.506865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:43.527 [2024-11-06 13:50:37.506878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.821 ms 00:24:43.527 [2024-11-06 13:50:37.506889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.787 [2024-11-06 13:50:37.548257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.787 [2024-11-06 13:50:37.548315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:43.787 [2024-11-06 13:50:37.548333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.283 ms 00:24:43.787 [2024-11-06 13:50:37.548344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.787 [2024-11-06 13:50:37.571273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.787 [2024-11-06 13:50:37.571335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:43.787 [2024-11-06 13:50:37.571359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.853 ms 00:24:43.787 [2024-11-06 13:50:37.571371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.787 [2024-11-06 13:50:37.571555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.787 [2024-11-06 13:50:37.571570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:43.787 [2024-11-06 13:50:37.571583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:43.787 [2024-11-06 13:50:37.571605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.787 [2024-11-06 13:50:37.610670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.787 [2024-11-06 13:50:37.610722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:43.787 [2024-11-06 13:50:37.610739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.028 ms 00:24:43.787 [2024-11-06 13:50:37.610751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.787 [2024-11-06 13:50:37.648580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.787 [2024-11-06 13:50:37.648645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:43.787 [2024-11-06 13:50:37.648667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.758 ms 00:24:43.787 [2024-11-06 13:50:37.648678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.787 [2024-11-06 13:50:37.686622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.787 [2024-11-06 13:50:37.686676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:43.787 [2024-11-06 13:50:37.686693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.880 ms 00:24:43.787 [2024-11-06 13:50:37.686704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.787 [2024-11-06 13:50:37.725264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.787 [2024-11-06 13:50:37.725339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:43.787 [2024-11-06 13:50:37.725357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.441 ms 00:24:43.787 [2024-11-06 
13:50:37.725369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.787 [2024-11-06 13:50:37.725437] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:43.787 [2024-11-06 13:50:37.725458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 98: 98 identical entries, each 0 / 261120
wr_cnt: 0 state: free 00:24:43.789 [2024-11-06 13:50:37.726600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:43.789 [2024-11-06 13:50:37.726611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:43.789 [2024-11-06 13:50:37.726631] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:43.789 [2024-11-06 13:50:37.726643] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03c79bca-64f5-43d2-9303-3144f18687a3 00:24:43.789 [2024-11-06 13:50:37.726655] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:43.789 [2024-11-06 13:50:37.726667] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:43.789 [2024-11-06 13:50:37.726678] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:43.789 [2024-11-06 13:50:37.726689] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:43.789 [2024-11-06 13:50:37.726700] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:43.789 [2024-11-06 13:50:37.726712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:43.789 [2024-11-06 13:50:37.726722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:43.789 [2024-11-06 13:50:37.726732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:43.789 [2024-11-06 13:50:37.726741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:43.789 [2024-11-06 13:50:37.726752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.789 [2024-11-06 13:50:37.726773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:43.789 [2024-11-06 13:50:37.726784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:24:43.789 [2024-11-06 13:50:37.726795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.789 [2024-11-06 13:50:37.748752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.789 [2024-11-06 13:50:37.748808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:43.789 [2024-11-06 13:50:37.748825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.930 ms 00:24:43.789 [2024-11-06 13:50:37.748836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.789 [2024-11-06 13:50:37.749517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.789 [2024-11-06 13:50:37.749539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:43.789 [2024-11-06 13:50:37.749551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:24:43.789 [2024-11-06 13:50:37.749562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.048 [2024-11-06 13:50:37.811233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.048 [2024-11-06 13:50:37.811313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:44.048 [2024-11-06 13:50:37.811330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.048 [2024-11-06 13:50:37.811342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.048 [2024-11-06 13:50:37.811534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.048 [2024-11-06 13:50:37.811548] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:44.048 [2024-11-06 13:50:37.811560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.048 [2024-11-06 13:50:37.811571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.048 [2024-11-06 13:50:37.811638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.048 [2024-11-06 13:50:37.811669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:44.048 [2024-11-06 13:50:37.811681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.048 [2024-11-06 13:50:37.811699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.048 [2024-11-06 13:50:37.811721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.048 [2024-11-06 13:50:37.811742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:44.048 [2024-11-06 13:50:37.811753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.048 [2024-11-06 13:50:37.811763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.048 [2024-11-06 13:50:37.949847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.048 [2024-11-06 13:50:37.949944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:44.048 [2024-11-06 13:50:37.949963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.048 [2024-11-06 13:50:37.949975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.308 [2024-11-06 13:50:38.058431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.308 [2024-11-06 13:50:38.058526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:44.308 [2024-11-06 13:50:38.058545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.308 [2024-11-06 13:50:38.058557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.308 [2024-11-06 13:50:38.058696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.308 [2024-11-06 13:50:38.058709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.308 [2024-11-06 13:50:38.058721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.308 [2024-11-06 13:50:38.058732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.308 [2024-11-06 13:50:38.058764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.308 [2024-11-06 13:50:38.058777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.308 [2024-11-06 13:50:38.058793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.308 [2024-11-06 13:50:38.058804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.308 [2024-11-06 13:50:38.058948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.308 [2024-11-06 13:50:38.058962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.308 [2024-11-06 13:50:38.058975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.308 [2024-11-06 13:50:38.058986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.308 [2024-11-06 13:50:38.059027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:44.308 [2024-11-06 13:50:38.059060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:44.308 [2024-11-06 13:50:38.059072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.308 [2024-11-06 13:50:38.059087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.308 [2024-11-06 13:50:38.059140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.308 [2024-11-06 13:50:38.059152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.308 [2024-11-06 13:50:38.059163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.308 [2024-11-06 13:50:38.059174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.308 [2024-11-06 13:50:38.059231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.308 [2024-11-06 13:50:38.059244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.308 [2024-11-06 13:50:38.059259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.308 [2024-11-06 13:50:38.059270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.308 [2024-11-06 13:50:38.059452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 566.929 ms, result 0 00:24:45.247 00:24:45.247 00:24:45.247 13:50:39 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:24:45.508 13:50:39 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:46.077 13:50:39 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:46.077 [2024-11-06 13:50:39.878669] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
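For context on the three harness commands just above: cmp --bytes=4194304 compares the first 4 MiB of the dumped data file against /dev/zero, which in this trim test appears to verify that the trimmed range reads back as zeroes; md5sum then records a checksum of the file, and spdk_dd rewrites the region through ftl0 from random_pattern. A rough, illustrative Python equivalent of the zero check follows (the harness itself uses cmp; the function name here is hypothetical):

def region_is_zero(path, nbytes=4 * 1024 * 1024, chunk=1 << 20):
    # Mirrors `cmp --bytes=4194304 <path> /dev/zero`: scan the first
    # nbytes in 1 MiB chunks and fail on the first nonzero byte.
    with open(path, "rb") as f:
        remaining = nbytes
        while remaining > 0:
            buf = f.read(min(chunk, remaining))
            if not buf:   # file ended early: cannot match the compared range
                return False
            if any(buf):  # nonzero byte found: differs from /dev/zero
                return False
            remaining -= len(buf)
    return True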
00:24:46.077 [2024-11-06 13:50:39.878862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76467 ] 00:24:46.337 [2024-11-06 13:50:40.062061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.337 [2024-11-06 13:50:40.204895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.907 [2024-11-06 13:50:40.629820] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.907 [2024-11-06 13:50:40.629901] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.907 [2024-11-06 13:50:40.797222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.797287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:46.907 [2024-11-06 13:50:40.797304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:46.907 [2024-11-06 13:50:40.797315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.800770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.800808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.907 [2024-11-06 13:50:40.800820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.435 ms 00:24:46.907 [2024-11-06 13:50:40.800831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.800946] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:46.907 [2024-11-06 13:50:40.801962] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:46.907 [2024-11-06 13:50:40.801991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.802004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.907 [2024-11-06 13:50:40.802015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:24:46.907 [2024-11-06 13:50:40.802041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.804603] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:46.907 [2024-11-06 13:50:40.824297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.824361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:46.907 [2024-11-06 13:50:40.824377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.695 ms 00:24:46.907 [2024-11-06 13:50:40.824388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.824488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.824519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:46.907 [2024-11-06 13:50:40.824531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:46.907 [2024-11-06 13:50:40.824542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.837152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:46.907 [2024-11-06 13:50:40.837181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.907 [2024-11-06 13:50:40.837193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.566 ms 00:24:46.907 [2024-11-06 13:50:40.837203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.837324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.837340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.907 [2024-11-06 13:50:40.837352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:46.907 [2024-11-06 13:50:40.837361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.837391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.837407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:46.907 [2024-11-06 13:50:40.837418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:46.907 [2024-11-06 13:50:40.837427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.837453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:46.907 [2024-11-06 13:50:40.843290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.843324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.907 [2024-11-06 13:50:40.843336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.845 ms 00:24:46.907 [2024-11-06 13:50:40.843347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.843398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.843411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:46.907 [2024-11-06 13:50:40.843422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:46.907 [2024-11-06 13:50:40.843434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.843455] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:46.907 [2024-11-06 13:50:40.843486] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:46.907 [2024-11-06 13:50:40.843534] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:46.907 [2024-11-06 13:50:40.843552] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:46.907 [2024-11-06 13:50:40.843658] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:46.907 [2024-11-06 13:50:40.843672] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:46.907 [2024-11-06 13:50:40.843686] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:46.907 [2024-11-06 13:50:40.843699] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:46.907 [2024-11-06 13:50:40.843716] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:46.907 [2024-11-06 13:50:40.843728] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:46.907 [2024-11-06 13:50:40.843739] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:46.907 [2024-11-06 13:50:40.843749] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:46.907 [2024-11-06 13:50:40.843759] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:46.907 [2024-11-06 13:50:40.843770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.843781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:46.907 [2024-11-06 13:50:40.843791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:24:46.907 [2024-11-06 13:50:40.843802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.907 [2024-11-06 13:50:40.843879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.907 [2024-11-06 13:50:40.843894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:46.907 [2024-11-06 13:50:40.843905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:46.908 [2024-11-06 13:50:40.843915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.908 [2024-11-06 13:50:40.844005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:46.908 [2024-11-06 13:50:40.844041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:46.908 [2024-11-06 13:50:40.844053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:46.908 [2024-11-06 13:50:40.844084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:46.908 [2024-11-06 13:50:40.844115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.908 [2024-11-06 13:50:40.844134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:46.908 [2024-11-06 13:50:40.844144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:46.908 [2024-11-06 13:50:40.844153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.908 [2024-11-06 13:50:40.844175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:46.908 [2024-11-06 13:50:40.844185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:46.908 [2024-11-06 13:50:40.844194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:46.908 [2024-11-06 13:50:40.844213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844222] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:46.908 [2024-11-06 13:50:40.844242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:46.908 [2024-11-06 13:50:40.844270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:46.908 [2024-11-06 13:50:40.844296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:46.908 [2024-11-06 13:50:40.844324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:46.908 [2024-11-06 13:50:40.844351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.908 [2024-11-06 13:50:40.844369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:46.908 [2024-11-06 13:50:40.844378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:46.908 [2024-11-06 13:50:40.844387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.908 [2024-11-06 13:50:40.844396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:46.908 [2024-11-06 13:50:40.844406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:46.908 [2024-11-06 13:50:40.844415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:46.908 [2024-11-06 13:50:40.844433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:46.908 [2024-11-06 13:50:40.844441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844450] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:46.908 [2024-11-06 13:50:40.844460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:46.908 [2024-11-06 13:50:40.844470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.908 [2024-11-06 13:50:40.844495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:46.908 [2024-11-06 13:50:40.844505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:46.908 [2024-11-06 13:50:40.844514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:46.908 
[2024-11-06 13:50:40.844523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:46.908 [2024-11-06 13:50:40.844533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:46.908 [2024-11-06 13:50:40.844542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:46.908 [2024-11-06 13:50:40.844553] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:46.908 [2024-11-06 13:50:40.844566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.908 [2024-11-06 13:50:40.844577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:46.908 [2024-11-06 13:50:40.844588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:46.908 [2024-11-06 13:50:40.844598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:46.908 [2024-11-06 13:50:40.844608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:46.908 [2024-11-06 13:50:40.844619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:46.908 [2024-11-06 13:50:40.844628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:46.908 [2024-11-06 13:50:40.844639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:46.908 [2024-11-06 13:50:40.844649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:46.908 [2024-11-06 13:50:40.844659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:46.908 [2024-11-06 13:50:40.844670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:46.908 [2024-11-06 13:50:40.844680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:46.908 [2024-11-06 13:50:40.844690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:46.908 [2024-11-06 13:50:40.844700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:46.908 [2024-11-06 13:50:40.844711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:46.908 [2024-11-06 13:50:40.844722] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:46.908 [2024-11-06 13:50:40.844734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.908 [2024-11-06 13:50:40.844745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:46.908 [2024-11-06 13:50:40.844755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:46.908 [2024-11-06 13:50:40.844766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:46.908 [2024-11-06 13:50:40.844776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:46.908 [2024-11-06 13:50:40.844786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.908 [2024-11-06 13:50:40.844796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:46.908 [2024-11-06 13:50:40.844812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:24:46.908 [2024-11-06 13:50:40.844822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:40.894991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:40.895042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.169 [2024-11-06 13:50:40.895058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.109 ms 00:24:47.169 [2024-11-06 13:50:40.895069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:40.895239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:40.895255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:47.169 [2024-11-06 13:50:40.895266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:47.169 [2024-11-06 13:50:40.895277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:40.961947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:40.961988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.169 [2024-11-06 13:50:40.962023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.643 ms 00:24:47.169 [2024-11-06 13:50:40.962044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:40.962137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:40.962151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.169 [2024-11-06 13:50:40.962164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:47.169 [2024-11-06 13:50:40.962174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:40.962989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:40.963011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.169 [2024-11-06 13:50:40.963032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:24:47.169 [2024-11-06 13:50:40.963051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:40.963186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:40.963200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.169 [2024-11-06 13:50:40.963212] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:47.169 [2024-11-06 13:50:40.963223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:40.987898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:40.987937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.169 [2024-11-06 13:50:40.987950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.649 ms 00:24:47.169 [2024-11-06 13:50:40.987962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:41.008490] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:47.169 [2024-11-06 13:50:41.008528] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:47.169 [2024-11-06 13:50:41.008543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:41.008555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:47.169 [2024-11-06 13:50:41.008566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.412 ms 00:24:47.169 [2024-11-06 13:50:41.008576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:41.038337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:41.038412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:47.169 [2024-11-06 13:50:41.038427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.658 ms 00:24:47.169 [2024-11-06 13:50:41.038439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:41.056949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:41.056986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:47.169 [2024-11-06 13:50:41.056999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.428 ms 00:24:47.169 [2024-11-06 13:50:41.057008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:41.074588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:41.074625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:47.169 [2024-11-06 13:50:41.074637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.495 ms 00:24:47.169 [2024-11-06 13:50:41.074648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.169 [2024-11-06 13:50:41.075506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.169 [2024-11-06 13:50:41.075536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:47.169 [2024-11-06 13:50:41.075562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:24:47.169 [2024-11-06 13:50:41.075573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.429 [2024-11-06 13:50:41.173409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.429 [2024-11-06 13:50:41.173484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:47.429 [2024-11-06 13:50:41.173502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.802 ms 00:24:47.429 [2024-11-06 13:50:41.173515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.429 [2024-11-06 13:50:41.184647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:47.429 [2024-11-06 13:50:41.211967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.429 [2024-11-06 13:50:41.212045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:47.429 [2024-11-06 13:50:41.212065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.310 ms 00:24:47.429 [2024-11-06 13:50:41.212083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.429 [2024-11-06 13:50:41.212241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.429 [2024-11-06 13:50:41.212256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:47.429 [2024-11-06 13:50:41.212268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:47.429 [2024-11-06 13:50:41.212279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.429 [2024-11-06 13:50:41.212370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.429 [2024-11-06 13:50:41.212383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:47.429 [2024-11-06 13:50:41.212395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:47.429 [2024-11-06 13:50:41.212406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.429 [2024-11-06 13:50:41.212459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.429 [2024-11-06 13:50:41.212474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:47.429 [2024-11-06 13:50:41.212486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:47.429 [2024-11-06 13:50:41.212496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.429 [2024-11-06 13:50:41.212542] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:47.429 [2024-11-06 13:50:41.212555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.429 [2024-11-06 13:50:41.212566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:47.429 [2024-11-06 13:50:41.212578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:47.429 [2024-11-06 13:50:41.212588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.429 [2024-11-06 13:50:41.251674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.429 [2024-11-06 13:50:41.251725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:47.429 [2024-11-06 13:50:41.251742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.061 ms 00:24:47.429 [2024-11-06 13:50:41.251753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.429 [2024-11-06 13:50:41.251883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.429 [2024-11-06 13:50:41.251899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:47.429 [2024-11-06 13:50:41.251911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:47.429 [2024-11-06 13:50:41.251921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
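The superblock layout records printed during this startup (e.g. "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00") and the human-readable region dump (e.g. "Region l2p ... blocks: 90.00 MiB") describe the same layout: the blk_offs/blk_sz fields are counted in FTL blocks of 4 KiB, so the 0x5a00-block region matches the 90.00 MiB l2p entry. A small illustrative check of that correspondence, using values copied from the dump above:

FTL_BLOCK_SIZE = 4096  # bytes; SPDK FTL's 4 KiB block

def blk_sz_to_mib(blk_sz_hex):
    # Convert a blk_sz field from the SB metadata dump into MiB.
    return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / (1 << 20)

assert blk_sz_to_mib("0x5a00") == 90.0   # l2p region:  "blocks: 90.00 MiB"
assert blk_sz_to_mib("0x800") == 8.0     # p2l0..p2l3:  "blocks: 8.00 MiB"
assert blk_sz_to_mib("0x20") == 0.125    # sb region:   "blocks: 0.12 MiB"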
00:24:47.429 [2024-11-06 13:50:41.253274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:47.429 [2024-11-06 13:50:41.257793] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 455.691 ms, result 0 00:24:47.429 [2024-11-06 13:50:41.258781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:47.429 [2024-11-06 13:50:41.278837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:47.689  [2024-11-06T13:50:41.672Z] Copying: 4096/4096 [kB] (average 29 MBps)[2024-11-06 13:50:41.418927] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:47.689 [2024-11-06 13:50:41.433795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.689 [2024-11-06 13:50:41.433832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:47.689 [2024-11-06 13:50:41.433848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:47.689 [2024-11-06 13:50:41.433865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.689 [2024-11-06 13:50:41.433888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:47.689 [2024-11-06 13:50:41.438481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.689 [2024-11-06 13:50:41.438514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:47.689 [2024-11-06 13:50:41.438526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.577 ms 00:24:47.689 [2024-11-06 13:50:41.438536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.689 [2024-11-06 13:50:41.440550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.689 [2024-11-06 13:50:41.440587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:47.689 [2024-11-06 13:50:41.440601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.986 ms 00:24:47.689 [2024-11-06 13:50:41.440611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.689 [2024-11-06 13:50:41.443855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.689 [2024-11-06 13:50:41.443893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:47.689 [2024-11-06 13:50:41.443921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:24:47.689 [2024-11-06 13:50:41.443932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.689 [2024-11-06 13:50:41.449456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.689 [2024-11-06 13:50:41.449487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:47.689 [2024-11-06 13:50:41.449498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.493 ms 00:24:47.689 [2024-11-06 13:50:41.449508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.689 [2024-11-06 13:50:41.484642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.689 [2024-11-06 13:50:41.484679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:47.689 [2024-11-06 13:50:41.484692] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.084 ms 00:24:47.689 [2024-11-06 13:50:41.484701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.689 [2024-11-06 13:50:41.505501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.690 [2024-11-06 13:50:41.505543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:47.690 [2024-11-06 13:50:41.505561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.720 ms 00:24:47.690 [2024-11-06 13:50:41.505571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.690 [2024-11-06 13:50:41.505691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.690 [2024-11-06 13:50:41.505704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:47.690 [2024-11-06 13:50:41.505714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:47.690 [2024-11-06 13:50:41.505724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.690 [2024-11-06 13:50:41.541733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.690 [2024-11-06 13:50:41.541769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:47.690 [2024-11-06 13:50:41.541798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.979 ms 00:24:47.690 [2024-11-06 13:50:41.541808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.690 [2024-11-06 13:50:41.577629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.690 [2024-11-06 13:50:41.577665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:47.690 [2024-11-06 13:50:41.577694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.766 ms 00:24:47.690 [2024-11-06 13:50:41.577703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.690 [2024-11-06 13:50:41.612244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.690 [2024-11-06 13:50:41.612279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:47.690 [2024-11-06 13:50:41.612307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.485 ms 00:24:47.690 [2024-11-06 13:50:41.612317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.690 [2024-11-06 13:50:41.646613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.690 [2024-11-06 13:50:41.646647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:47.690 [2024-11-06 13:50:41.646676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.213 ms 00:24:47.690 [2024-11-06 13:50:41.646686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.690 [2024-11-06 13:50:41.646738] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:47.690 [2024-11-06 13:50:41.646756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:47.690 [2024-11-06 13:50:41.646801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.646998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:47.690 [2024-11-06 13:50:41.647415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647594] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:47.691 [2024-11-06 13:50:41.647844] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:47.691 [2024-11-06 13:50:41.647854] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03c79bca-64f5-43d2-9303-3144f18687a3 00:24:47.691 [2024-11-06 13:50:41.647865] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:47.691 [2024-11-06 13:50:41.647875] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:24:47.691 [2024-11-06 13:50:41.647885] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:47.691 [2024-11-06 13:50:41.647895] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:47.691 [2024-11-06 13:50:41.647905] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:47.691 [2024-11-06 13:50:41.647915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:47.691 [2024-11-06 13:50:41.647925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:47.691 [2024-11-06 13:50:41.647934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:47.691 [2024-11-06 13:50:41.647942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:47.691 [2024-11-06 13:50:41.647952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.691 [2024-11-06 13:50:41.647968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:47.691 [2024-11-06 13:50:41.647979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:24:47.691 [2024-11-06 13:50:41.647989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.691 [2024-11-06 13:50:41.669118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.691 [2024-11-06 13:50:41.669150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:47.691 [2024-11-06 13:50:41.669162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.109 ms 00:24:47.691 [2024-11-06 13:50:41.669172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.691 [2024-11-06 13:50:41.669871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.691 [2024-11-06 13:50:41.669889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:47.691 [2024-11-06 13:50:41.669901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:24:47.691 [2024-11-06 13:50:41.669912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-11-06 13:50:41.727357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.950 [2024-11-06 13:50:41.727391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.950 [2024-11-06 13:50:41.727421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.950 [2024-11-06 13:50:41.727432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-11-06 13:50:41.727542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.950 [2024-11-06 13:50:41.727555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.950 [2024-11-06 13:50:41.727566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.950 [2024-11-06 13:50:41.727577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-11-06 13:50:41.727631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.950 [2024-11-06 13:50:41.727644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.950 [2024-11-06 13:50:41.727655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.950 [2024-11-06 13:50:41.727665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-11-06 13:50:41.727686] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.950 [2024-11-06 13:50:41.727702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.950 [2024-11-06 13:50:41.727713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.950 [2024-11-06 13:50:41.727739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.950 [2024-11-06 13:50:41.864082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.950 [2024-11-06 13:50:41.864196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.950 [2024-11-06 13:50:41.864216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.950 [2024-11-06 13:50:41.864228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.220 [2024-11-06 13:50:41.975841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.220 [2024-11-06 13:50:41.975948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.220 [2024-11-06 13:50:41.975967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.220 [2024-11-06 13:50:41.975978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.220 [2024-11-06 13:50:41.976131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.220 [2024-11-06 13:50:41.976145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:48.220 [2024-11-06 13:50:41.976157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.220 [2024-11-06 13:50:41.976170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.220 [2024-11-06 13:50:41.976204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.220 [2024-11-06 13:50:41.976216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:48.220 [2024-11-06 13:50:41.976238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.220 [2024-11-06 13:50:41.976249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.220 [2024-11-06 13:50:41.976386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.220 [2024-11-06 13:50:41.976401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:48.220 [2024-11-06 13:50:41.976429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.220 [2024-11-06 13:50:41.976440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.220 [2024-11-06 13:50:41.976485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.220 [2024-11-06 13:50:41.976498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:48.220 [2024-11-06 13:50:41.976515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.220 [2024-11-06 13:50:41.976526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.220 [2024-11-06 13:50:41.976578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.220 [2024-11-06 13:50:41.976589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:48.220 [2024-11-06 13:50:41.976600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.220 [2024-11-06 13:50:41.976610] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:48.220 [2024-11-06 13:50:41.976666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.220 [2024-11-06 13:50:41.976679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:48.220 [2024-11-06 13:50:41.976696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.220 [2024-11-06 13:50:41.976706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.220 [2024-11-06 13:50:41.976887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.065 ms, result 0 00:24:49.207 00:24:49.207 00:24:49.207 13:50:43 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76503 00:24:49.207 13:50:43 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:49.207 13:50:43 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76503 00:24:49.207 13:50:43 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76503 ']' 00:24:49.207 13:50:43 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.207 13:50:43 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.207 13:50:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.207 13:50:43 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.207 13:50:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:49.466 [2024-11-06 13:50:43.317548] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
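What follows is the trim test proper: trim.sh launches a fresh spdk_tgt with the ftl_init log flag, records its pid (svcpid=76503), and waitforlisten blocks until the target answers on the UNIX socket /var/tmp/spdk.sock before the script issues load_config and, later, the bdev_ftl_unmap calls. A minimal sketch of that launch-and-wait pattern, using the paths shown in this log (the polling loop below only approximates what autotest_common.sh's waitforlisten does; it is not that function's actual code):

  # start the target in the background and remember its pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # poll until the target accepts RPCs on the default socket (/var/tmp/spdk.sock)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done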
00:24:49.466 [2024-11-06 13:50:43.317724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76503 ] 00:24:49.725 [2024-11-06 13:50:43.503896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.725 [2024-11-06 13:50:43.636610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.100 13:50:44 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:51.100 13:50:44 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:24:51.100 13:50:44 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:51.100 [2024-11-06 13:50:44.931765] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.100 [2024-11-06 13:50:44.931853] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.360 [2024-11-06 13:50:45.124918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.124980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.360 [2024-11-06 13:50:45.125004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:51.360 [2024-11-06 13:50:45.125016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.129285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.129323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.360 [2024-11-06 13:50:45.129339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.235 ms 00:24:51.360 [2024-11-06 13:50:45.129350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.129461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.360 [2024-11-06 13:50:45.130510] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.360 [2024-11-06 13:50:45.130541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.130552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.360 [2024-11-06 13:50:45.130566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:24:51.360 [2024-11-06 13:50:45.130577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.133175] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:51.360 [2024-11-06 13:50:45.154317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.154388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:51.360 [2024-11-06 13:50:45.154405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.146 ms 00:24:51.360 [2024-11-06 13:50:45.154424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.154531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.154553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:51.360 [2024-11-06 13:50:45.154565] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:51.360 [2024-11-06 13:50:45.154582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.167451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.167507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.360 [2024-11-06 13:50:45.167521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.804 ms 00:24:51.360 [2024-11-06 13:50:45.167538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.167725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.167748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.360 [2024-11-06 13:50:45.167760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:24:51.360 [2024-11-06 13:50:45.167776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.167815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.167833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.360 [2024-11-06 13:50:45.167844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:51.360 [2024-11-06 13:50:45.167862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.167891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:51.360 [2024-11-06 13:50:45.173985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.174026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.360 [2024-11-06 13:50:45.174044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.096 ms 00:24:51.360 [2024-11-06 13:50:45.174055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.174119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.174133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.360 [2024-11-06 13:50:45.174150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:51.360 [2024-11-06 13:50:45.174168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.174199] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:51.360 [2024-11-06 13:50:45.174226] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:51.360 [2024-11-06 13:50:45.174282] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:51.360 [2024-11-06 13:50:45.174304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:51.360 [2024-11-06 13:50:45.174429] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.360 [2024-11-06 13:50:45.174443] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.360 [2024-11-06 13:50:45.174472] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.360 [2024-11-06 13:50:45.174486] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.360 [2024-11-06 13:50:45.174505] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.360 [2024-11-06 13:50:45.174517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:51.360 [2024-11-06 13:50:45.174534] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:51.360 [2024-11-06 13:50:45.174546] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.360 [2024-11-06 13:50:45.174568] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.360 [2024-11-06 13:50:45.174580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.174596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.360 [2024-11-06 13:50:45.174608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:24:51.360 [2024-11-06 13:50:45.174624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.174708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.360 [2024-11-06 13:50:45.174727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.360 [2024-11-06 13:50:45.174738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:51.360 [2024-11-06 13:50:45.174755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.360 [2024-11-06 13:50:45.174861] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.360 [2024-11-06 13:50:45.174883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.360 [2024-11-06 13:50:45.174895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.360 [2024-11-06 13:50:45.174912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.360 [2024-11-06 13:50:45.174924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.360 [2024-11-06 13:50:45.174939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.360 [2024-11-06 13:50:45.174950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:51.360 [2024-11-06 13:50:45.174974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.360 [2024-11-06 13:50:45.174984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:51.360 [2024-11-06 13:50:45.175000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.360 [2024-11-06 13:50:45.175010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.360 [2024-11-06 13:50:45.175037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:51.360 [2024-11-06 13:50:45.175047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.360 [2024-11-06 13:50:45.175063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.360 [2024-11-06 13:50:45.175073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:51.360 [2024-11-06 13:50:45.175089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.360 
[2024-11-06 13:50:45.175099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.360 [2024-11-06 13:50:45.175115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:51.360 [2024-11-06 13:50:45.175125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.360 [2024-11-06 13:50:45.175141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.360 [2024-11-06 13:50:45.175163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:51.360 [2024-11-06 13:50:45.175178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.360 [2024-11-06 13:50:45.175189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.360 [2024-11-06 13:50:45.175210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:51.360 [2024-11-06 13:50:45.175220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.360 [2024-11-06 13:50:45.175235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.360 [2024-11-06 13:50:45.175245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:51.360 [2024-11-06 13:50:45.175261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.360 [2024-11-06 13:50:45.175271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.360 [2024-11-06 13:50:45.175286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:51.360 [2024-11-06 13:50:45.175296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.361 [2024-11-06 13:50:45.175313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.361 [2024-11-06 13:50:45.175323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:51.361 [2024-11-06 13:50:45.175339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.361 [2024-11-06 13:50:45.175349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.361 [2024-11-06 13:50:45.175364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:51.361 [2024-11-06 13:50:45.175373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.361 [2024-11-06 13:50:45.175389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.361 [2024-11-06 13:50:45.175401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:51.361 [2024-11-06 13:50:45.175422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.361 [2024-11-06 13:50:45.175432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.361 [2024-11-06 13:50:45.175448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:51.361 [2024-11-06 13:50:45.175458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.361 [2024-11-06 13:50:45.175473] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.361 [2024-11-06 13:50:45.175490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.361 [2024-11-06 13:50:45.175506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.361 [2024-11-06 13:50:45.175516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.361 [2024-11-06 13:50:45.175532] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:51.361 [2024-11-06 13:50:45.175542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.361 [2024-11-06 13:50:45.175558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.361 [2024-11-06 13:50:45.175568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.361 [2024-11-06 13:50:45.175583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.361 [2024-11-06 13:50:45.175594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:51.361 [2024-11-06 13:50:45.175611] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.361 [2024-11-06 13:50:45.175624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.361 [2024-11-06 13:50:45.175649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:51.361 [2024-11-06 13:50:45.175661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:51.361 [2024-11-06 13:50:45.175677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:51.361 [2024-11-06 13:50:45.175689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:51.361 [2024-11-06 13:50:45.175705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:51.361 [2024-11-06 13:50:45.175717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:51.361 [2024-11-06 13:50:45.175733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:51.361 [2024-11-06 13:50:45.175744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:51.361 [2024-11-06 13:50:45.175760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:51.361 [2024-11-06 13:50:45.175771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:51.361 [2024-11-06 13:50:45.175787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:51.361 [2024-11-06 13:50:45.175798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:51.361 [2024-11-06 13:50:45.175814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:51.361 [2024-11-06 13:50:45.175825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:51.361 [2024-11-06 13:50:45.175841] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.361 [2024-11-06 
13:50:45.175853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.361 [2024-11-06 13:50:45.175876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.361 [2024-11-06 13:50:45.175888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.361 [2024-11-06 13:50:45.175905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.361 [2024-11-06 13:50:45.175916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.361 [2024-11-06 13:50:45.175933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.361 [2024-11-06 13:50:45.175945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.361 [2024-11-06 13:50:45.175961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:24:51.361 [2024-11-06 13:50:45.175971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.361 [2024-11-06 13:50:45.228517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.361 [2024-11-06 13:50:45.228573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.361 [2024-11-06 13:50:45.228593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.450 ms 00:24:51.361 [2024-11-06 13:50:45.228609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.361 [2024-11-06 13:50:45.228810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.361 [2024-11-06 13:50:45.228825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.361 [2024-11-06 13:50:45.228840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:51.361 [2024-11-06 13:50:45.228851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.361 [2024-11-06 13:50:45.286448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.361 [2024-11-06 13:50:45.286503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.361 [2024-11-06 13:50:45.286525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.558 ms 00:24:51.361 [2024-11-06 13:50:45.286537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.361 [2024-11-06 13:50:45.286654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.361 [2024-11-06 13:50:45.286668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.361 [2024-11-06 13:50:45.286686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:51.361 [2024-11-06 13:50:45.286697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.361 [2024-11-06 13:50:45.287515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.361 [2024-11-06 13:50:45.287534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.361 [2024-11-06 13:50:45.287559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:24:51.361 [2024-11-06 13:50:45.287571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:51.361 [2024-11-06 13:50:45.287717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.361 [2024-11-06 13:50:45.287733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.361 [2024-11-06 13:50:45.287750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:24:51.361 [2024-11-06 13:50:45.287762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.361 [2024-11-06 13:50:45.315759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.361 [2024-11-06 13:50:45.315799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.361 [2024-11-06 13:50:45.315819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.961 ms 00:24:51.361 [2024-11-06 13:50:45.315831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.349193] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:51.620 [2024-11-06 13:50:45.349233] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:51.620 [2024-11-06 13:50:45.349256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.349268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:51.620 [2024-11-06 13:50:45.349286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.271 ms 00:24:51.620 [2024-11-06 13:50:45.349297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.380808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.380847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:51.620 [2024-11-06 13:50:45.380868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.413 ms 00:24:51.620 [2024-11-06 13:50:45.380880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.400139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.400195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:51.620 [2024-11-06 13:50:45.400222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.160 ms 00:24:51.620 [2024-11-06 13:50:45.400232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.418852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.418889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:51.620 [2024-11-06 13:50:45.418909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.531 ms 00:24:51.620 [2024-11-06 13:50:45.418919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.419775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.419800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.620 [2024-11-06 13:50:45.419818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:24:51.620 [2024-11-06 13:50:45.419830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 
13:50:45.519512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.519579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:51.620 [2024-11-06 13:50:45.519602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.646 ms 00:24:51.620 [2024-11-06 13:50:45.519615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.531619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:51.620 [2024-11-06 13:50:45.558938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.559027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.620 [2024-11-06 13:50:45.559051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.200 ms 00:24:51.620 [2024-11-06 13:50:45.559067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.559244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.559262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:51.620 [2024-11-06 13:50:45.559275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:51.620 [2024-11-06 13:50:45.559289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.559361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.559377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.620 [2024-11-06 13:50:45.559389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:51.620 [2024-11-06 13:50:45.559407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.559436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.559452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:51.620 [2024-11-06 13:50:45.559463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:51.620 [2024-11-06 13:50:45.559480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.559521] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:51.620 [2024-11-06 13:50:45.559543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.559553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:51.620 [2024-11-06 13:50:45.559590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:51.620 [2024-11-06 13:50:45.559601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.620 [2024-11-06 13:50:45.599282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.620 [2024-11-06 13:50:45.599321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.620 [2024-11-06 13:50:45.599343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.636 ms 00:24:51.620 [2024-11-06 13:50:45.599355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.621 [2024-11-06 13:50:45.599484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.621 [2024-11-06 13:50:45.599499] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.621 [2024-11-06 13:50:45.599516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:51.621 [2024-11-06 13:50:45.599534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.621 [2024-11-06 13:50:45.600917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.879 [2024-11-06 13:50:45.605498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 475.575 ms, result 0 00:24:51.879 [2024-11-06 13:50:45.606746] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:51.879 Some configs were skipped because the RPC state that can call them passed over. 00:24:51.879 13:50:45 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:52.137 [2024-11-06 13:50:45.903283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.137 [2024-11-06 13:50:45.903355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:52.137 [2024-11-06 13:50:45.903373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:24:52.137 [2024-11-06 13:50:45.903391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.137 [2024-11-06 13:50:45.903438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.705 ms, result 0 00:24:52.137 true 00:24:52.138 13:50:45 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:52.396 [2024-11-06 13:50:46.175310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.396 [2024-11-06 13:50:46.175384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:52.396 [2024-11-06 13:50:46.175408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:24:52.396 [2024-11-06 13:50:46.175421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.396 [2024-11-06 13:50:46.175477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.385 ms, result 0 00:24:52.396 true 00:24:52.396 13:50:46 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76503 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76503 ']' 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76503 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76503 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76503' 00:24:52.396 killing process with pid 76503 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76503 00:24:52.396 13:50:46 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76503 00:24:53.773 [2024-11-06 13:50:47.480585] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.480683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:53.773 [2024-11-06 13:50:47.480701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:53.773 [2024-11-06 13:50:47.480714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.480743] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:53.773 [2024-11-06 13:50:47.485479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.485514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:53.773 [2024-11-06 13:50:47.485533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:24:53.773 [2024-11-06 13:50:47.485543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.485815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.485828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:53.773 [2024-11-06 13:50:47.485842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:24:53.773 [2024-11-06 13:50:47.485852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.489133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.489167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:53.773 [2024-11-06 13:50:47.489185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:24:53.773 [2024-11-06 13:50:47.489197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.494978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.495011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:53.773 [2024-11-06 13:50:47.495035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.735 ms 00:24:53.773 [2024-11-06 13:50:47.495045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.510871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.510906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:53.773 [2024-11-06 13:50:47.510924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.751 ms 00:24:53.773 [2024-11-06 13:50:47.510946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.521768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.521807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:53.773 [2024-11-06 13:50:47.521823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.748 ms 00:24:53.773 [2024-11-06 13:50:47.521834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.521979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.521994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:53.773 [2024-11-06 13:50:47.522008] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:53.773 [2024-11-06 13:50:47.522034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.537072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.537105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:53.773 [2024-11-06 13:50:47.537120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.011 ms 00:24:53.773 [2024-11-06 13:50:47.537129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.552103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.552137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:53.773 [2024-11-06 13:50:47.552162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.914 ms 00:24:53.773 [2024-11-06 13:50:47.552172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.566844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.566876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:53.773 [2024-11-06 13:50:47.566898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.612 ms 00:24:53.773 [2024-11-06 13:50:47.566908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.581846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.773 [2024-11-06 13:50:47.581879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:53.773 [2024-11-06 13:50:47.581897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.823 ms 00:24:53.773 [2024-11-06 13:50:47.581907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.773 [2024-11-06 13:50:47.581959] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:53.773 [2024-11-06 13:50:47.581977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.581997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 
13:50:47.582133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:53.773 [2024-11-06 13:50:47.582149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:24:53.774 [2024-11-06 13:50:47.582474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.582997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:53.774 [2024-11-06 13:50:47.583337] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:53.774 [2024-11-06 13:50:47.583365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03c79bca-64f5-43d2-9303-3144f18687a3 00:24:53.775 [2024-11-06 13:50:47.583390] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:53.775 [2024-11-06 13:50:47.583414] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:53.775 [2024-11-06 13:50:47.583424] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:53.775 [2024-11-06 13:50:47.583441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:53.775 [2024-11-06 13:50:47.583451] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:53.775 [2024-11-06 13:50:47.583467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:53.775 [2024-11-06 13:50:47.583477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:53.775 [2024-11-06 13:50:47.583502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:53.775 [2024-11-06 13:50:47.583512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:53.775 [2024-11-06 13:50:47.583544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:53.775 [2024-11-06 13:50:47.583555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:53.775 [2024-11-06 13:50:47.583571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:24:53.775 [2024-11-06 13:50:47.583581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.775 [2024-11-06 13:50:47.604155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.775 [2024-11-06 13:50:47.604187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:53.775 [2024-11-06 13:50:47.604212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.536 ms 00:24:53.775 [2024-11-06 13:50:47.604222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.775 [2024-11-06 13:50:47.604812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.775 [2024-11-06 13:50:47.604831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:53.775 [2024-11-06 13:50:47.604848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:24:53.775 [2024-11-06 13:50:47.604866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.775 [2024-11-06 13:50:47.677781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.775 [2024-11-06 13:50:47.677820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:53.775 [2024-11-06 13:50:47.677839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.775 [2024-11-06 13:50:47.677850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.775 [2024-11-06 13:50:47.677981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.775 [2024-11-06 13:50:47.677994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:53.775 [2024-11-06 13:50:47.678008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.775 [2024-11-06 13:50:47.678033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.775 [2024-11-06 13:50:47.678091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.775 [2024-11-06 13:50:47.678104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:53.775 [2024-11-06 13:50:47.678122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.775 [2024-11-06 13:50:47.678131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.775 [2024-11-06 13:50:47.678156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.775 [2024-11-06 13:50:47.678166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:53.775 [2024-11-06 13:50:47.678181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.775 [2024-11-06 13:50:47.678191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 13:50:47.809229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.034 [2024-11-06 13:50:47.809305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.034 [2024-11-06 13:50:47.809327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.034 [2024-11-06 13:50:47.809338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 
13:50:47.911938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.034 [2024-11-06 13:50:47.912037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.034 [2024-11-06 13:50:47.912061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.034 [2024-11-06 13:50:47.912080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 13:50:47.912233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.034 [2024-11-06 13:50:47.912247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.034 [2024-11-06 13:50:47.912270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.034 [2024-11-06 13:50:47.912281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 13:50:47.912318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.034 [2024-11-06 13:50:47.912329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.034 [2024-11-06 13:50:47.912343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.034 [2024-11-06 13:50:47.912353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 13:50:47.912490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.034 [2024-11-06 13:50:47.912503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.034 [2024-11-06 13:50:47.912517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.034 [2024-11-06 13:50:47.912527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 13:50:47.912573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.034 [2024-11-06 13:50:47.912585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:54.034 [2024-11-06 13:50:47.912599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.034 [2024-11-06 13:50:47.912608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 13:50:47.912664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.034 [2024-11-06 13:50:47.912675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.034 [2024-11-06 13:50:47.912692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.034 [2024-11-06 13:50:47.912702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 13:50:47.912758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.034 [2024-11-06 13:50:47.912769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.034 [2024-11-06 13:50:47.912785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.034 [2024-11-06 13:50:47.912795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.034 [2024-11-06 13:50:47.912969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 432.346 ms, result 0 00:24:55.412 13:50:49 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:55.412 [2024-11-06 13:50:49.160138] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:24:55.412 [2024-11-06 13:50:49.160321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76578 ] 00:24:55.412 [2024-11-06 13:50:49.365729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.671 [2024-11-06 13:50:49.510438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.240 [2024-11-06 13:50:49.935070] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:56.240 [2024-11-06 13:50:49.935152] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:56.240 [2024-11-06 13:50:50.103649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.240 [2024-11-06 13:50:50.103739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:56.240 [2024-11-06 13:50:50.103757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:56.240 [2024-11-06 13:50:50.103769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.240 [2024-11-06 13:50:50.107400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.240 [2024-11-06 13:50:50.107440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:56.240 [2024-11-06 13:50:50.107454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:24:56.240 [2024-11-06 13:50:50.107465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.240 [2024-11-06 13:50:50.107573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:56.240 [2024-11-06 13:50:50.108571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:56.240 [2024-11-06 13:50:50.108601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.240 [2024-11-06 13:50:50.108613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:56.240 [2024-11-06 13:50:50.108625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:24:56.240 [2024-11-06 13:50:50.108636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.240 [2024-11-06 13:50:50.111349] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:56.241 [2024-11-06 13:50:50.130865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.130907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:56.241 [2024-11-06 13:50:50.130923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.517 ms 00:24:56.241 [2024-11-06 13:50:50.130934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.131053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.131074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:56.241 [2024-11-06 13:50:50.131087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:56.241 [2024-11-06 
13:50:50.131097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.143970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.144000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:56.241 [2024-11-06 13:50:50.144028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.827 ms 00:24:56.241 [2024-11-06 13:50:50.144047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.144178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.144194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:56.241 [2024-11-06 13:50:50.144206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:56.241 [2024-11-06 13:50:50.144217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.144249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.144266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:56.241 [2024-11-06 13:50:50.144278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:56.241 [2024-11-06 13:50:50.144288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.144315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:56.241 [2024-11-06 13:50:50.150136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.150167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:56.241 [2024-11-06 13:50:50.150196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.829 ms 00:24:56.241 [2024-11-06 13:50:50.150206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.150261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.150274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:56.241 [2024-11-06 13:50:50.150285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:56.241 [2024-11-06 13:50:50.150295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.150318] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:56.241 [2024-11-06 13:50:50.150355] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:56.241 [2024-11-06 13:50:50.150411] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:56.241 [2024-11-06 13:50:50.150433] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:56.241 [2024-11-06 13:50:50.150529] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:56.241 [2024-11-06 13:50:50.150544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:56.241 [2024-11-06 13:50:50.150557] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:24:56.241 [2024-11-06 13:50:50.150571] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:56.241 [2024-11-06 13:50:50.150588] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:56.241 [2024-11-06 13:50:50.150601] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:56.241 [2024-11-06 13:50:50.150612] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:56.241 [2024-11-06 13:50:50.150623] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:56.241 [2024-11-06 13:50:50.150634] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:56.241 [2024-11-06 13:50:50.150645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.150656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:56.241 [2024-11-06 13:50:50.150668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:24:56.241 [2024-11-06 13:50:50.150678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.150758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.241 [2024-11-06 13:50:50.150775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:56.241 [2024-11-06 13:50:50.150786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:56.241 [2024-11-06 13:50:50.150797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.241 [2024-11-06 13:50:50.150892] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:56.241 [2024-11-06 13:50:50.150904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:56.241 [2024-11-06 13:50:50.150916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:56.241 [2024-11-06 13:50:50.150926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.241 [2024-11-06 13:50:50.150938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:56.241 [2024-11-06 13:50:50.150948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:56.241 [2024-11-06 13:50:50.150959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:56.241 [2024-11-06 13:50:50.150970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:56.241 [2024-11-06 13:50:50.150980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:56.241 [2024-11-06 13:50:50.150989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:56.241 [2024-11-06 13:50:50.150999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:56.241 [2024-11-06 13:50:50.151009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:56.241 [2024-11-06 13:50:50.151019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:56.241 [2024-11-06 13:50:50.151056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:56.241 [2024-11-06 13:50:50.151067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:56.241 [2024-11-06 13:50:50.151077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:24:56.241 [2024-11-06 13:50:50.151097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:56.241 [2024-11-06 13:50:50.151106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:56.241 [2024-11-06 13:50:50.151127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:56.241 [2024-11-06 13:50:50.151147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:56.241 [2024-11-06 13:50:50.151156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:56.241 [2024-11-06 13:50:50.151176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:56.241 [2024-11-06 13:50:50.151186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:56.241 [2024-11-06 13:50:50.151205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:56.241 [2024-11-06 13:50:50.151214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:56.241 [2024-11-06 13:50:50.151232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:56.241 [2024-11-06 13:50:50.151242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:56.241 [2024-11-06 13:50:50.151260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:56.241 [2024-11-06 13:50:50.151269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:56.241 [2024-11-06 13:50:50.151279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:56.241 [2024-11-06 13:50:50.151289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:56.241 [2024-11-06 13:50:50.151298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:56.241 [2024-11-06 13:50:50.151307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:56.241 [2024-11-06 13:50:50.151326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:56.241 [2024-11-06 13:50:50.151335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151344] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:56.241 [2024-11-06 13:50:50.151354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:56.241 [2024-11-06 13:50:50.151364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:56.241 [2024-11-06 13:50:50.151379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.241 [2024-11-06 13:50:50.151389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:56.241 [2024-11-06 13:50:50.151399] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:56.241 [2024-11-06 13:50:50.151408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:56.241 [2024-11-06 13:50:50.151417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:56.242 [2024-11-06 13:50:50.151426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:56.242 [2024-11-06 13:50:50.151437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:56.242 [2024-11-06 13:50:50.151449] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:56.242 [2024-11-06 13:50:50.151462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:56.242 [2024-11-06 13:50:50.151474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:56.242 [2024-11-06 13:50:50.151485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:56.242 [2024-11-06 13:50:50.151496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:56.242 [2024-11-06 13:50:50.151507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:56.242 [2024-11-06 13:50:50.151518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:56.242 [2024-11-06 13:50:50.151528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:56.242 [2024-11-06 13:50:50.151539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:56.242 [2024-11-06 13:50:50.151550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:56.242 [2024-11-06 13:50:50.151560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:56.242 [2024-11-06 13:50:50.151571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:56.242 [2024-11-06 13:50:50.151581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:56.242 [2024-11-06 13:50:50.151592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:56.242 [2024-11-06 13:50:50.151602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:56.242 [2024-11-06 13:50:50.151612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:56.242 [2024-11-06 13:50:50.151622] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:56.242 [2024-11-06 13:50:50.151635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:56.242 [2024-11-06 13:50:50.151646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:56.242 [2024-11-06 13:50:50.151656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:56.242 [2024-11-06 13:50:50.151667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:56.242 [2024-11-06 13:50:50.151678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:56.242 [2024-11-06 13:50:50.151689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.242 [2024-11-06 13:50:50.151700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:56.242 [2024-11-06 13:50:50.151716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:24:56.242 [2024-11-06 13:50:50.151726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.242 [2024-11-06 13:50:50.200847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.242 [2024-11-06 13:50:50.200910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.242 [2024-11-06 13:50:50.200926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.058 ms 00:24:56.242 [2024-11-06 13:50:50.200938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.242 [2024-11-06 13:50:50.201153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.242 [2024-11-06 13:50:50.201168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:56.242 [2024-11-06 13:50:50.201180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:56.242 [2024-11-06 13:50:50.201191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.500 [2024-11-06 13:50:50.262926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.500 [2024-11-06 13:50:50.262970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.500 [2024-11-06 13:50:50.262989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.708 ms 00:24:56.500 [2024-11-06 13:50:50.263000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.263116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.263130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.501 [2024-11-06 13:50:50.263143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:56.501 [2024-11-06 13:50:50.263154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.263938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.263959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.501 [2024-11-06 13:50:50.263972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:24:56.501 [2024-11-06 13:50:50.263991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.264141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.264162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.501 [2024-11-06 13:50:50.264173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:56.501 [2024-11-06 13:50:50.264184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.288488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.288530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.501 [2024-11-06 13:50:50.288546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.278 ms 00:24:56.501 [2024-11-06 13:50:50.288557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.309427] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:56.501 [2024-11-06 13:50:50.309468] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:56.501 [2024-11-06 13:50:50.309484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.309496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:56.501 [2024-11-06 13:50:50.309509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.779 ms 00:24:56.501 [2024-11-06 13:50:50.309519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.340366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.340417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:56.501 [2024-11-06 13:50:50.340447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.743 ms 00:24:56.501 [2024-11-06 13:50:50.340458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.358931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.358970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:56.501 [2024-11-06 13:50:50.358984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.386 ms 00:24:56.501 [2024-11-06 13:50:50.358995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.377134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.377171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:56.501 [2024-11-06 13:50:50.377184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.049 ms 00:24:56.501 [2024-11-06 13:50:50.377194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.378019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 13:50:50.378065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:56.501 [2024-11-06 13:50:50.378078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:24:56.501 [2024-11-06 13:50:50.378090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.501 [2024-11-06 13:50:50.475352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.501 [2024-11-06 
13:50:50.475437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:56.501 [2024-11-06 13:50:50.475457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.228 ms 00:24:56.501 [2024-11-06 13:50:50.475469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.760 [2024-11-06 13:50:50.486839] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:56.760 [2024-11-06 13:50:50.513884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.760 [2024-11-06 13:50:50.513974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:56.760 [2024-11-06 13:50:50.513994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.242 ms 00:24:56.760 [2024-11-06 13:50:50.514014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.760 [2024-11-06 13:50:50.514186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.760 [2024-11-06 13:50:50.514202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:56.760 [2024-11-06 13:50:50.514215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:56.760 [2024-11-06 13:50:50.514226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.760 [2024-11-06 13:50:50.514296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.760 [2024-11-06 13:50:50.514308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:56.760 [2024-11-06 13:50:50.514320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:56.760 [2024-11-06 13:50:50.514331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.760 [2024-11-06 13:50:50.514408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.760 [2024-11-06 13:50:50.514423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:56.760 [2024-11-06 13:50:50.514435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:56.760 [2024-11-06 13:50:50.514446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.760 [2024-11-06 13:50:50.514492] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:56.760 [2024-11-06 13:50:50.514506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.760 [2024-11-06 13:50:50.514517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:56.760 [2024-11-06 13:50:50.514528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:56.760 [2024-11-06 13:50:50.514539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.760 [2024-11-06 13:50:50.552645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.760 [2024-11-06 13:50:50.552688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:56.760 [2024-11-06 13:50:50.552718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.081 ms 00:24:56.760 [2024-11-06 13:50:50.552730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.760 [2024-11-06 13:50:50.552855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.760 [2024-11-06 13:50:50.552871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:56.760 [2024-11-06 
13:50:50.552883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:56.760 [2024-11-06 13:50:50.552894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.760 [2024-11-06 13:50:50.554187] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:56.760 [2024-11-06 13:50:50.558766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 450.171 ms, result 0 00:24:56.760 [2024-11-06 13:50:50.559621] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:56.760 [2024-11-06 13:50:50.578338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:57.697  [2024-11-06T13:50:53.058Z] Copying: 32/256 [MB] (32 MBps) [2024-11-06T13:50:53.993Z] Copying: 59/256 [MB] (27 MBps) [2024-11-06T13:50:54.930Z] Copying: 86/256 [MB] (27 MBps) [2024-11-06T13:50:55.864Z] Copying: 114/256 [MB] (27 MBps) [2024-11-06T13:50:56.798Z] Copying: 141/256 [MB] (27 MBps) [2024-11-06T13:50:57.735Z] Copying: 168/256 [MB] (27 MBps) [2024-11-06T13:50:58.670Z] Copying: 195/256 [MB] (27 MBps) [2024-11-06T13:51:00.047Z] Copying: 222/256 [MB] (27 MBps) [2024-11-06T13:51:00.047Z] Copying: 250/256 [MB] (27 MBps) [2024-11-06T13:51:00.306Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-06 13:51:00.067856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:06.323 [2024-11-06 13:51:00.090734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.090806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:06.323 [2024-11-06 13:51:00.090836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:06.323 [2024-11-06 13:51:00.090865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.090913] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:06.323 [2024-11-06 13:51:00.096004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.096059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:06.323 [2024-11-06 13:51:00.096081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.060 ms 00:25:06.323 [2024-11-06 13:51:00.096098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.096481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.096527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:06.323 [2024-11-06 13:51:00.096550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:25:06.323 [2024-11-06 13:51:00.096568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.099957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.100005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:06.323 [2024-11-06 13:51:00.100036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.350 ms 00:25:06.323 [2024-11-06 13:51:00.100054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 
13:51:00.106118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.106159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:06.323 [2024-11-06 13:51:00.106196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.021 ms 00:25:06.323 [2024-11-06 13:51:00.106213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.145374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.145436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:06.323 [2024-11-06 13:51:00.145460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.042 ms 00:25:06.323 [2024-11-06 13:51:00.145477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.168604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.168675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:06.323 [2024-11-06 13:51:00.168706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.039 ms 00:25:06.323 [2024-11-06 13:51:00.168722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.168962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.169007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:06.323 [2024-11-06 13:51:00.169050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:25:06.323 [2024-11-06 13:51:00.169063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.206064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.206111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:06.323 [2024-11-06 13:51:00.206134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.952 ms 00:25:06.323 [2024-11-06 13:51:00.206150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.244003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.244055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:06.323 [2024-11-06 13:51:00.244079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.771 ms 00:25:06.323 [2024-11-06 13:51:00.244094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.323 [2024-11-06 13:51:00.280840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.323 [2024-11-06 13:51:00.280880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:06.323 [2024-11-06 13:51:00.280917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.669 ms 00:25:06.323 [2024-11-06 13:51:00.280933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.583 [2024-11-06 13:51:00.316629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.583 [2024-11-06 13:51:00.316669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:06.583 [2024-11-06 13:51:00.316691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.533 ms 00:25:06.583 [2024-11-06 13:51:00.316707] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.583 [2024-11-06 13:51:00.316780] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:06.583 [2024-11-06 13:51:00.316808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.316991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:06.583 [2024-11-06 13:51:00.317936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.317956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.317978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.317996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318215] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 
13:51:00.318723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:06.584 [2024-11-06 13:51:00.318771] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:06.584 [2024-11-06 13:51:00.318785] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03c79bca-64f5-43d2-9303-3144f18687a3 00:25:06.584 [2024-11-06 13:51:00.318800] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:06.584 [2024-11-06 13:51:00.318814] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:06.584 [2024-11-06 13:51:00.318829] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:06.584 [2024-11-06 13:51:00.318849] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:06.584 [2024-11-06 13:51:00.318863] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:06.584 [2024-11-06 13:51:00.318883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:06.584 [2024-11-06 13:51:00.318904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:06.584 [2024-11-06 13:51:00.318923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:06.584 [2024-11-06 13:51:00.318941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:06.584 [2024-11-06 13:51:00.318961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.584 [2024-11-06 13:51:00.318982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:06.584 [2024-11-06 13:51:00.318996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:25:06.584 [2024-11-06 13:51:00.319010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.584 [2024-11-06 13:51:00.340953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.584 [2024-11-06 13:51:00.340989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:06.584 [2024-11-06 13:51:00.341028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.888 ms 00:25:06.584 [2024-11-06 13:51:00.341065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.584 [2024-11-06 13:51:00.341760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.584 [2024-11-06 13:51:00.341793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:06.584 [2024-11-06 13:51:00.341814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:25:06.584 [2024-11-06 13:51:00.341831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.584 [2024-11-06 13:51:00.400622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.584 [2024-11-06 13:51:00.400663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:06.584 [2024-11-06 13:51:00.400701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.584 [2024-11-06 13:51:00.400719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.584 [2024-11-06 13:51:00.400893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.584 [2024-11-06 13:51:00.400915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:25:06.584 [2024-11-06 13:51:00.400933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.584 [2024-11-06 13:51:00.400951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.584 [2024-11-06 13:51:00.401065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.584 [2024-11-06 13:51:00.401086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.584 [2024-11-06 13:51:00.401105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.584 [2024-11-06 13:51:00.401118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.584 [2024-11-06 13:51:00.401152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.584 [2024-11-06 13:51:00.401179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:06.584 [2024-11-06 13:51:00.401199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.584 [2024-11-06 13:51:00.401219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.584 [2024-11-06 13:51:00.537539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.584 [2024-11-06 13:51:00.537619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.584 [2024-11-06 13:51:00.537646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.584 [2024-11-06 13:51:00.537665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.843 [2024-11-06 13:51:00.646746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.843 [2024-11-06 13:51:00.646829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:06.843 [2024-11-06 13:51:00.646857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.843 [2024-11-06 13:51:00.646874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.843 [2024-11-06 13:51:00.647065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.843 [2024-11-06 13:51:00.647087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.843 [2024-11-06 13:51:00.647105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.843 [2024-11-06 13:51:00.647120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.843 [2024-11-06 13:51:00.647172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.843 [2024-11-06 13:51:00.647193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.843 [2024-11-06 13:51:00.647221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.843 [2024-11-06 13:51:00.647238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.843 [2024-11-06 13:51:00.647400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.843 [2024-11-06 13:51:00.647418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.843 [2024-11-06 13:51:00.647430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.843 [2024-11-06 13:51:00.647441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.843 [2024-11-06 13:51:00.647499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.843 [2024-11-06 13:51:00.647522] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:06.843 [2024-11-06 13:51:00.647541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.843 [2024-11-06 13:51:00.647567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.843 [2024-11-06 13:51:00.647634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.843 [2024-11-06 13:51:00.647652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.843 [2024-11-06 13:51:00.647671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.844 [2024-11-06 13:51:00.647692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.844 [2024-11-06 13:51:00.647776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.844 [2024-11-06 13:51:00.647795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.844 [2024-11-06 13:51:00.647821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.844 [2024-11-06 13:51:00.647841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.844 [2024-11-06 13:51:00.648108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 557.376 ms, result 0 00:25:08.220 00:25:08.220 00:25:08.220 13:51:01 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:08.490 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:25:08.490 13:51:02 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:25:08.490 13:51:02 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:25:08.490 13:51:02 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:08.490 13:51:02 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:08.490 13:51:02 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:25:08.490 13:51:02 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:08.490 13:51:02 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76503 00:25:08.490 13:51:02 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76503 ']' 00:25:08.490 13:51:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76503 00:25:08.490 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76503) - No such process 00:25:08.490 Process with pid 76503 is not found 00:25:08.490 13:51:02 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 76503 is not found' 00:25:08.490 ************************************ 00:25:08.490 END TEST ftl_trim 00:25:08.490 ************************************ 00:25:08.490 00:25:08.490 real 1m11.196s 00:25:08.490 user 1m39.513s 00:25:08.490 sys 0m8.855s 00:25:08.490 13:51:02 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:08.490 13:51:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:08.779 13:51:02 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:08.779 13:51:02 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:08.779 13:51:02 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:08.779 13:51:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:08.779 ************************************ 
00:25:08.779 START TEST ftl_restore 00:25:08.779 ************************************ 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:08.779 * Looking for test storage... 00:25:08.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.779 13:51:02 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:08.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.779 --rc genhtml_branch_coverage=1 00:25:08.779 --rc genhtml_function_coverage=1 00:25:08.779 --rc genhtml_legend=1 00:25:08.779 --rc geninfo_all_blocks=1 00:25:08.779 --rc geninfo_unexecuted_blocks=1 00:25:08.779 00:25:08.779 ' 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:08.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.779 --rc genhtml_branch_coverage=1 00:25:08.779 --rc genhtml_function_coverage=1 00:25:08.779 --rc genhtml_legend=1 00:25:08.779 --rc geninfo_all_blocks=1 00:25:08.779 --rc geninfo_unexecuted_blocks=1 00:25:08.779 00:25:08.779 ' 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:08.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.779 --rc genhtml_branch_coverage=1 00:25:08.779 --rc genhtml_function_coverage=1 00:25:08.779 --rc genhtml_legend=1 00:25:08.779 --rc geninfo_all_blocks=1 00:25:08.779 --rc geninfo_unexecuted_blocks=1 00:25:08.779 00:25:08.779 ' 00:25:08.779 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:08.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.779 --rc genhtml_branch_coverage=1 00:25:08.779 --rc genhtml_function_coverage=1 00:25:08.779 --rc genhtml_legend=1 00:25:08.779 --rc geninfo_all_blocks=1 00:25:08.779 --rc geninfo_unexecuted_blocks=1 00:25:08.779 00:25:08.779 ' 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
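The xtrace above walks through the scripts/common.sh version guard that selects the lcov options: lt 1.15 2 delegates to cmp_versions 1.15 '<' 2, which splits each version string on the characters ".-:" and compares it component by component. A minimal standalone sketch of that comparison, reusing the variable names visible in the trace; this is an illustration, not the actual scripts/common.sh source:

    cmp_versions() {                        # usage: cmp_versions 1.15 '<' 2
        local ver1 ver2 ver1_l ver2_l op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"      # "2"    -> (2)
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            d1=${ver1[v]:-0}                # a missing component compares as 0
            d2=${ver2[v]:-0}
            if (( d1 > d2 )); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]] # every component equal
    }
    cmp_versions 1.15 '<' 2 && echo "legacy lcov"       # true in this run, hence the --rc lcov_* options exported above
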
00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.viM3syzN7K 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:08.779 13:51:02 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:25:08.780 13:51:02 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:08.780 13:51:02 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:25:08.780 13:51:02 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:08.780 
13:51:02 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76784 00:25:08.780 13:51:02 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76784 00:25:08.780 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 76784 ']' 00:25:08.780 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.780 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:08.780 13:51:02 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:08.780 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.780 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:08.780 13:51:02 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:09.039 [2024-11-06 13:51:02.887457] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:25:09.039 [2024-11-06 13:51:02.887665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76784 ] 00:25:09.297 [2024-11-06 13:51:03.097814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.556 [2024-11-06 13:51:03.289788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.492 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:10.492 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:25:10.492 13:51:04 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:10.492 13:51:04 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:25:10.492 13:51:04 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:10.492 13:51:04 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:25:10.492 13:51:04 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:25:10.492 13:51:04 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:10.751 13:51:04 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:10.751 13:51:04 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:25:10.751 13:51:04 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:10.751 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:25:10.751 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:10.751 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:25:10.751 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:25:10.751 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:11.011 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:11.011 { 00:25:11.011 "name": "nvme0n1", 00:25:11.011 "aliases": [ 00:25:11.011 "8d8d9ac1-53c7-4486-9065-f4059d71732e" 00:25:11.011 ], 00:25:11.011 "product_name": "NVMe disk", 00:25:11.011 "block_size": 4096, 00:25:11.011 "num_blocks": 1310720, 00:25:11.011 "uuid": 
"8d8d9ac1-53c7-4486-9065-f4059d71732e", 00:25:11.011 "numa_id": -1, 00:25:11.011 "assigned_rate_limits": { 00:25:11.011 "rw_ios_per_sec": 0, 00:25:11.011 "rw_mbytes_per_sec": 0, 00:25:11.011 "r_mbytes_per_sec": 0, 00:25:11.011 "w_mbytes_per_sec": 0 00:25:11.011 }, 00:25:11.011 "claimed": true, 00:25:11.011 "claim_type": "read_many_write_one", 00:25:11.011 "zoned": false, 00:25:11.011 "supported_io_types": { 00:25:11.011 "read": true, 00:25:11.011 "write": true, 00:25:11.011 "unmap": true, 00:25:11.011 "flush": true, 00:25:11.011 "reset": true, 00:25:11.011 "nvme_admin": true, 00:25:11.011 "nvme_io": true, 00:25:11.011 "nvme_io_md": false, 00:25:11.011 "write_zeroes": true, 00:25:11.011 "zcopy": false, 00:25:11.011 "get_zone_info": false, 00:25:11.011 "zone_management": false, 00:25:11.011 "zone_append": false, 00:25:11.011 "compare": true, 00:25:11.011 "compare_and_write": false, 00:25:11.011 "abort": true, 00:25:11.011 "seek_hole": false, 00:25:11.011 "seek_data": false, 00:25:11.011 "copy": true, 00:25:11.011 "nvme_iov_md": false 00:25:11.011 }, 00:25:11.011 "driver_specific": { 00:25:11.011 "nvme": [ 00:25:11.011 { 00:25:11.011 "pci_address": "0000:00:11.0", 00:25:11.011 "trid": { 00:25:11.011 "trtype": "PCIe", 00:25:11.011 "traddr": "0000:00:11.0" 00:25:11.011 }, 00:25:11.011 "ctrlr_data": { 00:25:11.011 "cntlid": 0, 00:25:11.011 "vendor_id": "0x1b36", 00:25:11.011 "model_number": "QEMU NVMe Ctrl", 00:25:11.011 "serial_number": "12341", 00:25:11.011 "firmware_revision": "8.0.0", 00:25:11.011 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:11.011 "oacs": { 00:25:11.011 "security": 0, 00:25:11.011 "format": 1, 00:25:11.011 "firmware": 0, 00:25:11.011 "ns_manage": 1 00:25:11.011 }, 00:25:11.011 "multi_ctrlr": false, 00:25:11.011 "ana_reporting": false 00:25:11.011 }, 00:25:11.011 "vs": { 00:25:11.011 "nvme_version": "1.4" 00:25:11.011 }, 00:25:11.011 "ns_data": { 00:25:11.011 "id": 1, 00:25:11.011 "can_share": false 00:25:11.011 } 00:25:11.011 } 00:25:11.011 ], 00:25:11.011 "mp_policy": "active_passive" 00:25:11.011 } 00:25:11.011 } 00:25:11.011 ]' 00:25:11.011 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:11.011 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:25:11.011 13:51:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:11.270 13:51:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:25:11.270 13:51:05 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:25:11.270 13:51:05 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:25:11.270 13:51:05 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:25:11.270 13:51:05 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:11.270 13:51:05 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:25:11.270 13:51:05 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:11.270 13:51:05 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:11.529 13:51:05 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=1f7bd5ff-d486-441e-a5d5-203f3022a70d 00:25:11.529 13:51:05 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:25:11.529 13:51:05 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f7bd5ff-d486-441e-a5d5-203f3022a70d 00:25:11.787 13:51:05 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:25:12.046 13:51:05 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=3331a87c-b63d-484b-8205-ddbae3293295 00:25:12.046 13:51:05 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3331a87c-b63d-484b-8205-ddbae3293295 00:25:12.305 13:51:06 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=a9a93bd4-0069-4924-b410-a00e87042a41 00:25:12.305 13:51:06 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:12.305 13:51:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a9a93bd4-0069-4924-b410-a00e87042a41 00:25:12.305 13:51:06 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:25:12.305 13:51:06 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:12.305 13:51:06 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=a9a93bd4-0069-4924-b410-a00e87042a41 00:25:12.305 13:51:06 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:25:12.305 13:51:06 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size a9a93bd4-0069-4924-b410-a00e87042a41 00:25:12.305 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=a9a93bd4-0069-4924-b410-a00e87042a41 00:25:12.305 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:12.305 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:25:12.305 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:25:12.305 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9a93bd4-0069-4924-b410-a00e87042a41 00:25:12.305 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:12.305 { 00:25:12.305 "name": "a9a93bd4-0069-4924-b410-a00e87042a41", 00:25:12.305 "aliases": [ 00:25:12.305 "lvs/nvme0n1p0" 00:25:12.305 ], 00:25:12.305 "product_name": "Logical Volume", 00:25:12.305 "block_size": 4096, 00:25:12.305 "num_blocks": 26476544, 00:25:12.305 "uuid": "a9a93bd4-0069-4924-b410-a00e87042a41", 00:25:12.305 "assigned_rate_limits": { 00:25:12.305 "rw_ios_per_sec": 0, 00:25:12.305 "rw_mbytes_per_sec": 0, 00:25:12.305 "r_mbytes_per_sec": 0, 00:25:12.305 "w_mbytes_per_sec": 0 00:25:12.305 }, 00:25:12.305 "claimed": false, 00:25:12.305 "zoned": false, 00:25:12.305 "supported_io_types": { 00:25:12.305 "read": true, 00:25:12.305 "write": true, 00:25:12.305 "unmap": true, 00:25:12.305 "flush": false, 00:25:12.305 "reset": true, 00:25:12.305 "nvme_admin": false, 00:25:12.305 "nvme_io": false, 00:25:12.305 "nvme_io_md": false, 00:25:12.305 "write_zeroes": true, 00:25:12.305 "zcopy": false, 00:25:12.305 "get_zone_info": false, 00:25:12.305 "zone_management": false, 00:25:12.305 "zone_append": false, 00:25:12.305 "compare": false, 00:25:12.305 "compare_and_write": false, 00:25:12.305 "abort": false, 00:25:12.305 "seek_hole": true, 00:25:12.305 "seek_data": true, 00:25:12.305 "copy": false, 00:25:12.305 "nvme_iov_md": false 00:25:12.305 }, 00:25:12.305 "driver_specific": { 00:25:12.305 "lvol": { 00:25:12.305 "lvol_store_uuid": "3331a87c-b63d-484b-8205-ddbae3293295", 00:25:12.305 "base_bdev": "nvme0n1", 00:25:12.305 "thin_provision": true, 00:25:12.305 "num_allocated_clusters": 0, 00:25:12.305 "snapshot": false, 00:25:12.305 "clone": false, 00:25:12.305 "esnap_clone": false 00:25:12.305 } 00:25:12.305 } 00:25:12.305 } 00:25:12.305 ]' 00:25:12.305 13:51:06 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:12.564 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:25:12.564 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:12.564 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:25:12.564 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:25:12.564 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:25:12.564 13:51:06 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:25:12.564 13:51:06 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:25:12.564 13:51:06 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:12.823 13:51:06 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:12.823 13:51:06 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:12.823 13:51:06 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size a9a93bd4-0069-4924-b410-a00e87042a41 00:25:12.823 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=a9a93bd4-0069-4924-b410-a00e87042a41 00:25:12.823 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:12.823 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:25:12.823 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:25:12.823 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9a93bd4-0069-4924-b410-a00e87042a41 00:25:13.082 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:13.082 { 00:25:13.082 "name": "a9a93bd4-0069-4924-b410-a00e87042a41", 00:25:13.082 "aliases": [ 00:25:13.082 "lvs/nvme0n1p0" 00:25:13.082 ], 00:25:13.082 "product_name": "Logical Volume", 00:25:13.082 "block_size": 4096, 00:25:13.082 "num_blocks": 26476544, 00:25:13.082 "uuid": "a9a93bd4-0069-4924-b410-a00e87042a41", 00:25:13.082 "assigned_rate_limits": { 00:25:13.082 "rw_ios_per_sec": 0, 00:25:13.082 "rw_mbytes_per_sec": 0, 00:25:13.082 "r_mbytes_per_sec": 0, 00:25:13.082 "w_mbytes_per_sec": 0 00:25:13.082 }, 00:25:13.082 "claimed": false, 00:25:13.082 "zoned": false, 00:25:13.082 "supported_io_types": { 00:25:13.082 "read": true, 00:25:13.082 "write": true, 00:25:13.082 "unmap": true, 00:25:13.082 "flush": false, 00:25:13.082 "reset": true, 00:25:13.082 "nvme_admin": false, 00:25:13.082 "nvme_io": false, 00:25:13.082 "nvme_io_md": false, 00:25:13.082 "write_zeroes": true, 00:25:13.082 "zcopy": false, 00:25:13.082 "get_zone_info": false, 00:25:13.082 "zone_management": false, 00:25:13.082 "zone_append": false, 00:25:13.082 "compare": false, 00:25:13.082 "compare_and_write": false, 00:25:13.082 "abort": false, 00:25:13.082 "seek_hole": true, 00:25:13.082 "seek_data": true, 00:25:13.082 "copy": false, 00:25:13.082 "nvme_iov_md": false 00:25:13.082 }, 00:25:13.082 "driver_specific": { 00:25:13.082 "lvol": { 00:25:13.082 "lvol_store_uuid": "3331a87c-b63d-484b-8205-ddbae3293295", 00:25:13.082 "base_bdev": "nvme0n1", 00:25:13.082 "thin_provision": true, 00:25:13.082 "num_allocated_clusters": 0, 00:25:13.082 "snapshot": false, 00:25:13.082 "clone": false, 00:25:13.082 "esnap_clone": false 00:25:13.082 } 00:25:13.082 } 00:25:13.082 } 00:25:13.082 ]' 00:25:13.082 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
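The pair of jq probes traced above is the body of the get_bdev_size helper: it pulls block_size and num_blocks out of the bdev_get_bdevs JSON and scales the product to MiB. A self-contained sketch with this run's numbers filled in as comments; the rpc.py path and jq filters are the ones appearing in the trace:

    get_bdev_size() {                       # prints the size of a bdev in MiB
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1310720 for nvme0n1, 26476544 for the lvol
        echo $(( bs * nb / 1024 / 1024 ))
    }
    # nvme0n1: 4096 B * 1310720 blocks = 5368709120 B = 5120 MiB, so the
    # 103424 MiB volume carved from it only works because bdev_lvol_create
    # is passed -t (thin provisioning), as traced above.
    # the lvol: 4096 B * 26476544 blocks = 103424 MiB, matching bdev_size=103424.
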
00:25:13.082 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:25:13.082 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:13.082 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:25:13.082 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:25:13.082 13:51:06 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:25:13.082 13:51:06 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:25:13.082 13:51:06 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:13.340 13:51:07 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:13.340 13:51:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size a9a93bd4-0069-4924-b410-a00e87042a41 00:25:13.340 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=a9a93bd4-0069-4924-b410-a00e87042a41 00:25:13.340 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:13.340 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:25:13.340 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:25:13.340 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9a93bd4-0069-4924-b410-a00e87042a41 00:25:13.599 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:13.599 { 00:25:13.599 "name": "a9a93bd4-0069-4924-b410-a00e87042a41", 00:25:13.599 "aliases": [ 00:25:13.599 "lvs/nvme0n1p0" 00:25:13.599 ], 00:25:13.599 "product_name": "Logical Volume", 00:25:13.599 "block_size": 4096, 00:25:13.599 "num_blocks": 26476544, 00:25:13.599 "uuid": "a9a93bd4-0069-4924-b410-a00e87042a41", 00:25:13.599 "assigned_rate_limits": { 00:25:13.599 "rw_ios_per_sec": 0, 00:25:13.599 "rw_mbytes_per_sec": 0, 00:25:13.599 "r_mbytes_per_sec": 0, 00:25:13.599 "w_mbytes_per_sec": 0 00:25:13.599 }, 00:25:13.599 "claimed": false, 00:25:13.599 "zoned": false, 00:25:13.599 "supported_io_types": { 00:25:13.599 "read": true, 00:25:13.599 "write": true, 00:25:13.599 "unmap": true, 00:25:13.599 "flush": false, 00:25:13.599 "reset": true, 00:25:13.599 "nvme_admin": false, 00:25:13.599 "nvme_io": false, 00:25:13.599 "nvme_io_md": false, 00:25:13.599 "write_zeroes": true, 00:25:13.599 "zcopy": false, 00:25:13.599 "get_zone_info": false, 00:25:13.599 "zone_management": false, 00:25:13.599 "zone_append": false, 00:25:13.599 "compare": false, 00:25:13.599 "compare_and_write": false, 00:25:13.599 "abort": false, 00:25:13.599 "seek_hole": true, 00:25:13.599 "seek_data": true, 00:25:13.599 "copy": false, 00:25:13.599 "nvme_iov_md": false 00:25:13.599 }, 00:25:13.599 "driver_specific": { 00:25:13.599 "lvol": { 00:25:13.599 "lvol_store_uuid": "3331a87c-b63d-484b-8205-ddbae3293295", 00:25:13.599 "base_bdev": "nvme0n1", 00:25:13.599 "thin_provision": true, 00:25:13.599 "num_allocated_clusters": 0, 00:25:13.599 "snapshot": false, 00:25:13.599 "clone": false, 00:25:13.599 "esnap_clone": false 00:25:13.599 } 00:25:13.599 } 00:25:13.599 } 00:25:13.599 ]' 00:25:13.599 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:13.599 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:25:13.599 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:13.599 13:51:07 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:25:13.599 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:25:13.599 13:51:07 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:25:13.599 13:51:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:13.599 13:51:07 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a9a93bd4-0069-4924-b410-a00e87042a41 --l2p_dram_limit 10' 00:25:13.599 13:51:07 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:13.599 13:51:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:13.599 13:51:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:13.599 13:51:07 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:25:13.599 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:25:13.599 13:51:07 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a9a93bd4-0069-4924-b410-a00e87042a41 --l2p_dram_limit 10 -c nvc0n1p0 00:25:13.859 [2024-11-06 13:51:07.663915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.859 [2024-11-06 13:51:07.664004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:13.859 [2024-11-06 13:51:07.664054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:13.859 [2024-11-06 13:51:07.664071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.859 [2024-11-06 13:51:07.664223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.859 [2024-11-06 13:51:07.664246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.859 [2024-11-06 13:51:07.664270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:13.859 [2024-11-06 13:51:07.664287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.859 [2024-11-06 13:51:07.664332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:13.859 [2024-11-06 13:51:07.665660] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:13.859 [2024-11-06 13:51:07.665714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.859 [2024-11-06 13:51:07.665734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.859 [2024-11-06 13:51:07.665757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:25:13.859 [2024-11-06 13:51:07.665773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.859 [2024-11-06 13:51:07.665956] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3613176e-faf5-4dd8-a731-104be21354e8 00:25:13.859 [2024-11-06 13:51:07.668781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.859 [2024-11-06 13:51:07.668843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:13.859 [2024-11-06 13:51:07.668864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:13.859 [2024-11-06 13:51:07.668885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.859 [2024-11-06 13:51:07.683780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.859 [2024-11-06 
13:51:07.683837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.859 [2024-11-06 13:51:07.683861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.791 ms 00:25:13.859 [2024-11-06 13:51:07.683883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.859 [2024-11-06 13:51:07.684064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.859 [2024-11-06 13:51:07.684094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.859 [2024-11-06 13:51:07.684113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:13.859 [2024-11-06 13:51:07.684141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.859 [2024-11-06 13:51:07.684263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.859 [2024-11-06 13:51:07.684295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:13.859 [2024-11-06 13:51:07.684308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:13.859 [2024-11-06 13:51:07.684330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.859 [2024-11-06 13:51:07.684371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:13.859 [2024-11-06 13:51:07.691308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.859 [2024-11-06 13:51:07.691351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.859 [2024-11-06 13:51:07.691381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.943 ms 00:25:13.859 [2024-11-06 13:51:07.691397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.860 [2024-11-06 13:51:07.691453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.860 [2024-11-06 13:51:07.691473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:13.860 [2024-11-06 13:51:07.691495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:13.860 [2024-11-06 13:51:07.691511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.860 [2024-11-06 13:51:07.691570] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:13.860 [2024-11-06 13:51:07.691754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:13.860 [2024-11-06 13:51:07.691794] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:13.860 [2024-11-06 13:51:07.691819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:13.860 [2024-11-06 13:51:07.691848] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:13.860 [2024-11-06 13:51:07.691863] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:13.860 [2024-11-06 13:51:07.691881] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:13.860 [2024-11-06 13:51:07.691901] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:13.860 [2024-11-06 13:51:07.691931] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:13.860 [2024-11-06 13:51:07.691944] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:13.860 [2024-11-06 13:51:07.691970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.860 [2024-11-06 13:51:07.691991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:13.860 [2024-11-06 13:51:07.692012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:25:13.860 [2024-11-06 13:51:07.692054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.860 [2024-11-06 13:51:07.692165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.860 [2024-11-06 13:51:07.692187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:13.860 [2024-11-06 13:51:07.692212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:13.860 [2024-11-06 13:51:07.692226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.860 [2024-11-06 13:51:07.692361] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:13.860 [2024-11-06 13:51:07.692385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:13.860 [2024-11-06 13:51:07.692407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.860 [2024-11-06 13:51:07.692427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:13.860 [2024-11-06 13:51:07.692462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:13.860 [2024-11-06 13:51:07.692501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:13.860 [2024-11-06 13:51:07.692525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.860 [2024-11-06 13:51:07.692566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:13.860 [2024-11-06 13:51:07.692580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:13.860 [2024-11-06 13:51:07.692598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.860 [2024-11-06 13:51:07.692613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:13.860 [2024-11-06 13:51:07.692634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:13.860 [2024-11-06 13:51:07.692647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:13.860 [2024-11-06 13:51:07.692683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:13.860 [2024-11-06 13:51:07.692709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:13.860 [2024-11-06 13:51:07.692751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.860 [2024-11-06 13:51:07.692788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:13.860 
[2024-11-06 13:51:07.692806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.860 [2024-11-06 13:51:07.692845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:13.860 [2024-11-06 13:51:07.692864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.860 [2024-11-06 13:51:07.692893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:13.860 [2024-11-06 13:51:07.692906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.860 [2024-11-06 13:51:07.692943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:13.860 [2024-11-06 13:51:07.692972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:13.860 [2024-11-06 13:51:07.692990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.860 [2024-11-06 13:51:07.693012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:13.860 [2024-11-06 13:51:07.693059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:13.860 [2024-11-06 13:51:07.693084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.860 [2024-11-06 13:51:07.693100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:13.860 [2024-11-06 13:51:07.693122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:13.860 [2024-11-06 13:51:07.693136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.860 [2024-11-06 13:51:07.693156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:13.860 [2024-11-06 13:51:07.693174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:13.860 [2024-11-06 13:51:07.693194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.860 [2024-11-06 13:51:07.693211] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:13.860 [2024-11-06 13:51:07.693237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:13.860 [2024-11-06 13:51:07.693257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.860 [2024-11-06 13:51:07.693284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.860 [2024-11-06 13:51:07.693304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:13.860 [2024-11-06 13:51:07.693329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:13.860 [2024-11-06 13:51:07.693342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:13.860 [2024-11-06 13:51:07.693359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:13.860 [2024-11-06 13:51:07.693373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:13.860 [2024-11-06 13:51:07.693390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:13.860 [2024-11-06 13:51:07.693416] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:13.860 [2024-11-06 
13:51:07.693447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.860 [2024-11-06 13:51:07.693475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:13.860 [2024-11-06 13:51:07.693500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:13.860 [2024-11-06 13:51:07.693519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:13.860 [2024-11-06 13:51:07.693543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:13.860 [2024-11-06 13:51:07.693562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:13.860 [2024-11-06 13:51:07.693585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:13.860 [2024-11-06 13:51:07.693602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:13.860 [2024-11-06 13:51:07.693623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:13.860 [2024-11-06 13:51:07.693642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:13.860 [2024-11-06 13:51:07.693669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:13.860 [2024-11-06 13:51:07.693689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:13.860 [2024-11-06 13:51:07.693713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:13.860 [2024-11-06 13:51:07.693733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:13.860 [2024-11-06 13:51:07.693761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:13.860 [2024-11-06 13:51:07.693780] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:13.860 [2024-11-06 13:51:07.693800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.860 [2024-11-06 13:51:07.693816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:13.860 [2024-11-06 13:51:07.693833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:13.860 [2024-11-06 13:51:07.693852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:13.860 [2024-11-06 13:51:07.693877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:13.860 [2024-11-06 13:51:07.693900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.860 [2024-11-06 13:51:07.693926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:13.861 [2024-11-06 13:51:07.693944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:25:13.861 [2024-11-06 13:51:07.693962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.861 [2024-11-06 13:51:07.694090] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:13.861 [2024-11-06 13:51:07.694128] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:18.054 [2024-11-06 13:51:11.231507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.231586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:18.054 [2024-11-06 13:51:11.231614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3537.396 ms 00:25:18.054 [2024-11-06 13:51:11.231635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.281379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.281447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:18.054 [2024-11-06 13:51:11.281475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.410 ms 00:25:18.054 [2024-11-06 13:51:11.281497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.281711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.281742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:18.054 [2024-11-06 13:51:11.281763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:18.054 [2024-11-06 13:51:11.281795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.337210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.337284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:18.054 [2024-11-06 13:51:11.337306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.313 ms 00:25:18.054 [2024-11-06 13:51:11.337328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.337393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.337423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:18.054 [2024-11-06 13:51:11.337440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:18.054 [2024-11-06 13:51:11.337463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.338583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.338837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:18.054 [2024-11-06 13:51:11.338871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:25:18.054 [2024-11-06 13:51:11.338897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 
[2024-11-06 13:51:11.339114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.339146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:18.054 [2024-11-06 13:51:11.339175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:25:18.054 [2024-11-06 13:51:11.339201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.366189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.366235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:18.054 [2024-11-06 13:51:11.366259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.951 ms 00:25:18.054 [2024-11-06 13:51:11.366281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.394095] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:18.054 [2024-11-06 13:51:11.399733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.399769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:18.054 [2024-11-06 13:51:11.399796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.288 ms 00:25:18.054 [2024-11-06 13:51:11.399812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.493334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.493404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:18.054 [2024-11-06 13:51:11.493437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.468 ms 00:25:18.054 [2024-11-06 13:51:11.493455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.493727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.493756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:18.054 [2024-11-06 13:51:11.493783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:25:18.054 [2024-11-06 13:51:11.493799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.531170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.531211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:18.054 [2024-11-06 13:51:11.531240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.290 ms 00:25:18.054 [2024-11-06 13:51:11.531257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.567224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.567264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:18.054 [2024-11-06 13:51:11.567292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.901 ms 00:25:18.054 [2024-11-06 13:51:11.567308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.568267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.568303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:18.054 
[2024-11-06 13:51:11.568329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:25:18.054 [2024-11-06 13:51:11.568350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.674670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.674725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:18.054 [2024-11-06 13:51:11.674760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.236 ms 00:25:18.054 [2024-11-06 13:51:11.674778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.714779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.714840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:18.054 [2024-11-06 13:51:11.714872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.883 ms 00:25:18.054 [2024-11-06 13:51:11.714888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.751455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.751495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:18.054 [2024-11-06 13:51:11.751522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.504 ms 00:25:18.054 [2024-11-06 13:51:11.751538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.788892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.788932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:18.054 [2024-11-06 13:51:11.788961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.289 ms 00:25:18.054 [2024-11-06 13:51:11.788976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.789060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.789083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:18.054 [2024-11-06 13:51:11.789112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:18.054 [2024-11-06 13:51:11.789147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.789301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.054 [2024-11-06 13:51:11.789322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:18.054 [2024-11-06 13:51:11.789351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:18.054 [2024-11-06 13:51:11.789369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.054 [2024-11-06 13:51:11.791110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4126.607 ms, result 0 00:25:18.054 { 00:25:18.054 "name": "ftl0", 00:25:18.054 "uuid": "3613176e-faf5-4dd8-a731-104be21354e8" 00:25:18.054 } 00:25:18.054 13:51:11 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:18.054 13:51:11 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:18.313 13:51:12 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:25:18.313 13:51:12 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:18.313 [2024-11-06 13:51:12.245767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.313 [2024-11-06 13:51:12.245989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:18.314 [2024-11-06 13:51:12.246190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:18.314 [2024-11-06 13:51:12.246245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.314 [2024-11-06 13:51:12.246311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:18.314 [2024-11-06 13:51:12.250959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.314 [2024-11-06 13:51:12.251003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:18.314 [2024-11-06 13:51:12.251041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.606 ms 00:25:18.314 [2024-11-06 13:51:12.251060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.314 [2024-11-06 13:51:12.251463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.314 [2024-11-06 13:51:12.251504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:18.314 [2024-11-06 13:51:12.251533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:25:18.314 [2024-11-06 13:51:12.251551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.314 [2024-11-06 13:51:12.254264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.314 [2024-11-06 13:51:12.254302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:18.314 [2024-11-06 13:51:12.254328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.673 ms 00:25:18.314 [2024-11-06 13:51:12.254345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.314 [2024-11-06 13:51:12.259760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.314 [2024-11-06 13:51:12.259799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:18.314 [2024-11-06 13:51:12.259830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.361 ms 00:25:18.314 [2024-11-06 13:51:12.259846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.574 [2024-11-06 13:51:12.298978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.574 [2024-11-06 13:51:12.299149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:18.574 [2024-11-06 13:51:12.299189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.053 ms 00:25:18.574 [2024-11-06 13:51:12.299201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.574 [2024-11-06 13:51:12.322752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.574 [2024-11-06 13:51:12.322906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:18.574 [2024-11-06 13:51:12.322942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.495 ms 00:25:18.574 [2024-11-06 13:51:12.322954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.574 [2024-11-06 13:51:12.323182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.574 [2024-11-06 13:51:12.323209] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:18.574 [2024-11-06 13:51:12.323232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:25:18.574 [2024-11-06 13:51:12.323250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.574 [2024-11-06 13:51:12.360303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.574 [2024-11-06 13:51:12.360466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:18.574 [2024-11-06 13:51:12.360496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.012 ms 00:25:18.574 [2024-11-06 13:51:12.360507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.574 [2024-11-06 13:51:12.397050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.574 [2024-11-06 13:51:12.397088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:18.574 [2024-11-06 13:51:12.397115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.485 ms 00:25:18.574 [2024-11-06 13:51:12.397129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.574 [2024-11-06 13:51:12.433966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.574 [2024-11-06 13:51:12.434004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:18.574 [2024-11-06 13:51:12.434048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.777 ms 00:25:18.574 [2024-11-06 13:51:12.434065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.574 [2024-11-06 13:51:12.470154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.574 [2024-11-06 13:51:12.470191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:18.574 [2024-11-06 13:51:12.470219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.958 ms 00:25:18.574 [2024-11-06 13:51:12.470233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.574 [2024-11-06 13:51:12.470288] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:18.574 [2024-11-06 13:51:12.470316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:18.574 [2024-11-06 13:51:12.470341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:18.574 [2024-11-06 13:51:12.470385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:18.574 [2024-11-06 13:51:12.470409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470538] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.470976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 
[2024-11-06 13:51:12.471081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:25:18.575 [2024-11-06 13:51:12.471655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.471999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:18.575 [2024-11-06 13:51:12.472298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:18.576 [2024-11-06 13:51:12.472318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:18.576 [2024-11-06 13:51:12.472343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:18.576 [2024-11-06 13:51:12.472358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:18.576 [2024-11-06 13:51:12.472377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:18.576 [2024-11-06 13:51:12.472394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:18.576 [2024-11-06 13:51:12.472420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:18.576 [2024-11-06 13:51:12.472448] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:18.576 [2024-11-06 13:51:12.472476] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3613176e-faf5-4dd8-a731-104be21354e8 00:25:18.576 [2024-11-06 13:51:12.472499] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:18.576 [2024-11-06 13:51:12.472525] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:18.576 [2024-11-06 13:51:12.472543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:18.576 [2024-11-06 13:51:12.472573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:18.576 [2024-11-06 13:51:12.472606] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:18.576 [2024-11-06 13:51:12.472631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:18.576 [2024-11-06 13:51:12.472650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:18.576 [2024-11-06 13:51:12.472668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:18.576 [2024-11-06 13:51:12.472680] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:25:18.576 [2024-11-06 13:51:12.472699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.576 [2024-11-06 13:51:12.472718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:18.576 [2024-11-06 13:51:12.472741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:25:18.576 [2024-11-06 13:51:12.472761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.576 [2024-11-06 13:51:12.494331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.576 [2024-11-06 13:51:12.494504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:18.576 [2024-11-06 13:51:12.494546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.479 ms 00:25:18.576 [2024-11-06 13:51:12.494561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.576 [2024-11-06 13:51:12.495206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.576 [2024-11-06 13:51:12.495244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:18.576 [2024-11-06 13:51:12.495277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:25:18.576 [2024-11-06 13:51:12.495296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.874 [2024-11-06 13:51:12.566565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.874 [2024-11-06 13:51:12.566604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:18.874 [2024-11-06 13:51:12.566632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.874 [2024-11-06 13:51:12.566649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.874 [2024-11-06 13:51:12.566754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.874 [2024-11-06 13:51:12.566775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:18.875 [2024-11-06 13:51:12.566803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.566820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.566974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.566996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:18.875 [2024-11-06 13:51:12.567041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.567060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.567105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.567124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:18.875 [2024-11-06 13:51:12.567145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.567160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.707806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.707889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:18.875 [2024-11-06 13:51:12.707922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:18.875 [2024-11-06 13:51:12.707938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.816483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.816563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:18.875 [2024-11-06 13:51:12.816595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.816618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.816826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.816849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:18.875 [2024-11-06 13:51:12.816873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.816890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.817013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.817073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:18.875 [2024-11-06 13:51:12.817099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.817117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.817308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.817331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:18.875 [2024-11-06 13:51:12.817354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.817370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.817442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.817464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:18.875 [2024-11-06 13:51:12.817486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.817503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.817587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.817603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:18.875 [2024-11-06 13:51:12.817618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.817631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.817724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.875 [2024-11-06 13:51:12.817742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:18.875 [2024-11-06 13:51:12.817757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.875 [2024-11-06 13:51:12.817769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.875 [2024-11-06 13:51:12.818022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 572.164 ms, result 0 00:25:18.875 true 00:25:18.875 13:51:12 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76784 
00:25:18.875 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76784 ']' 00:25:18.875 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76784 00:25:18.875 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:25:18.875 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:18.875 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76784 00:25:19.133 killing process with pid 76784 00:25:19.134 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:19.134 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:19.134 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76784' 00:25:19.134 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 76784 00:25:19.134 13:51:12 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 76784 00:25:24.405 13:51:18 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:29.684 262144+0 records in 00:25:29.684 262144+0 records out 00:25:29.684 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.53567 s, 237 MB/s 00:25:29.684 13:51:22 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:30.624 13:51:24 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.624 [2024-11-06 13:51:24.497070] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:25:30.624 [2024-11-06 13:51:24.497226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77048 ] 00:25:30.883 [2024-11-06 13:51:24.693383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.143 [2024-11-06 13:51:24.871860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.402 [2024-11-06 13:51:25.320534] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.402 [2024-11-06 13:51:25.320621] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.664 [2024-11-06 13:51:25.511848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.511949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:31.664 [2024-11-06 13:51:25.511976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:31.664 [2024-11-06 13:51:25.511993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.512107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.512139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.664 [2024-11-06 13:51:25.512157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:31.664 [2024-11-06 13:51:25.512173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.512210] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:25:31.664 [2024-11-06 13:51:25.513824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:31.664 [2024-11-06 13:51:25.513865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.513884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.664 [2024-11-06 13:51:25.513902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:25:31.664 [2024-11-06 13:51:25.513918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.517147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:31.664 [2024-11-06 13:51:25.551951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.552050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:31.664 [2024-11-06 13:51:25.552079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.802 ms 00:25:31.664 [2024-11-06 13:51:25.552098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.552248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.552271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:31.664 [2024-11-06 13:51:25.552291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:31.664 [2024-11-06 13:51:25.552308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.567609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.567682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.664 [2024-11-06 13:51:25.567706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.156 ms 00:25:31.664 [2024-11-06 13:51:25.567744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.567939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.567963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.664 [2024-11-06 13:51:25.567982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:25:31.664 [2024-11-06 13:51:25.567999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.568145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.568167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:31.664 [2024-11-06 13:51:25.568186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:31.664 [2024-11-06 13:51:25.568202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.568260] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:31.664 [2024-11-06 13:51:25.577498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.577762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.664 [2024-11-06 13:51:25.577945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.248 ms 00:25:31.664 [2024-11-06 13:51:25.578007] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.578226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.578302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:31.664 [2024-11-06 13:51:25.578437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:31.664 [2024-11-06 13:51:25.578551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.578800] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:31.664 [2024-11-06 13:51:25.578970] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:31.664 [2024-11-06 13:51:25.579242] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:31.664 [2024-11-06 13:51:25.579497] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:31.664 [2024-11-06 13:51:25.579661] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:31.664 [2024-11-06 13:51:25.579685] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:31.664 [2024-11-06 13:51:25.579708] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:31.664 [2024-11-06 13:51:25.579730] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:31.664 [2024-11-06 13:51:25.579751] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:31.664 [2024-11-06 13:51:25.579769] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:31.664 [2024-11-06 13:51:25.579786] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:31.664 [2024-11-06 13:51:25.579802] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:31.664 [2024-11-06 13:51:25.579835] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:31.664 [2024-11-06 13:51:25.579853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.579871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:31.664 [2024-11-06 13:51:25.579888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:25:31.664 [2024-11-06 13:51:25.579905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.580060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.664 [2024-11-06 13:51:25.580081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:31.664 [2024-11-06 13:51:25.580098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:25:31.664 [2024-11-06 13:51:25.580115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.664 [2024-11-06 13:51:25.580288] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:31.664 [2024-11-06 13:51:25.580313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:31.664 [2024-11-06 13:51:25.580330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:25:31.664 [2024-11-06 13:51:25.580348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.664 [2024-11-06 13:51:25.580365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:31.664 [2024-11-06 13:51:25.580381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:31.664 [2024-11-06 13:51:25.580397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:31.664 [2024-11-06 13:51:25.580413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:31.664 [2024-11-06 13:51:25.580429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:31.664 [2024-11-06 13:51:25.580444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.664 [2024-11-06 13:51:25.580459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:31.664 [2024-11-06 13:51:25.580474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:31.664 [2024-11-06 13:51:25.580489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.664 [2024-11-06 13:51:25.580504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:31.664 [2024-11-06 13:51:25.580520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:31.664 [2024-11-06 13:51:25.580557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.664 [2024-11-06 13:51:25.580573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:31.664 [2024-11-06 13:51:25.580588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:31.664 [2024-11-06 13:51:25.580605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.664 [2024-11-06 13:51:25.580621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:31.664 [2024-11-06 13:51:25.580637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:31.664 [2024-11-06 13:51:25.580652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.664 [2024-11-06 13:51:25.580668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:31.664 [2024-11-06 13:51:25.580683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:31.665 [2024-11-06 13:51:25.580698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.665 [2024-11-06 13:51:25.580714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:31.665 [2024-11-06 13:51:25.580730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:31.665 [2024-11-06 13:51:25.580747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.665 [2024-11-06 13:51:25.580763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:31.665 [2024-11-06 13:51:25.580778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:31.665 [2024-11-06 13:51:25.580792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.665 [2024-11-06 13:51:25.580808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:31.665 [2024-11-06 13:51:25.580823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:31.665 [2024-11-06 13:51:25.580838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.665 [2024-11-06 13:51:25.580854] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:25:31.665 [2024-11-06 13:51:25.580869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:31.665 [2024-11-06 13:51:25.580885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.665 [2024-11-06 13:51:25.580899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:31.665 [2024-11-06 13:51:25.580915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:31.665 [2024-11-06 13:51:25.580931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.665 [2024-11-06 13:51:25.580946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:31.665 [2024-11-06 13:51:25.580961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:31.665 [2024-11-06 13:51:25.580976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.665 [2024-11-06 13:51:25.580992] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:31.665 [2024-11-06 13:51:25.581010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:31.665 [2024-11-06 13:51:25.581043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.665 [2024-11-06 13:51:25.581060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.665 [2024-11-06 13:51:25.581078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:31.665 [2024-11-06 13:51:25.581094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:31.665 [2024-11-06 13:51:25.581109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:31.665 [2024-11-06 13:51:25.581125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:31.665 [2024-11-06 13:51:25.581140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:31.665 [2024-11-06 13:51:25.581155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:31.665 [2024-11-06 13:51:25.581174] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:31.665 [2024-11-06 13:51:25.581195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.665 [2024-11-06 13:51:25.581225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:31.665 [2024-11-06 13:51:25.581244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:31.665 [2024-11-06 13:51:25.581261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:31.665 [2024-11-06 13:51:25.581279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:31.665 [2024-11-06 13:51:25.581296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:31.665 [2024-11-06 13:51:25.581314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:31.665 [2024-11-06 13:51:25.581331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:31.665 [2024-11-06 13:51:25.581348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:31.665 [2024-11-06 13:51:25.581365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:31.665 [2024-11-06 13:51:25.581383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:31.665 [2024-11-06 13:51:25.581399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:31.665 [2024-11-06 13:51:25.581415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:31.665 [2024-11-06 13:51:25.581432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:31.665 [2024-11-06 13:51:25.581449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:31.665 [2024-11-06 13:51:25.581466] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:31.665 [2024-11-06 13:51:25.581485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.665 [2024-11-06 13:51:25.581503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:31.665 [2024-11-06 13:51:25.581519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:31.665 [2024-11-06 13:51:25.581536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:31.665 [2024-11-06 13:51:25.581553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:31.665 [2024-11-06 13:51:25.581570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.665 [2024-11-06 13:51:25.581587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:31.665 [2024-11-06 13:51:25.581604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms 00:25:31.665 [2024-11-06 13:51:25.581620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.925 [2024-11-06 13:51:25.650268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.925 [2024-11-06 13:51:25.650627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.925 [2024-11-06 13:51:25.650754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.560 ms 00:25:31.925 [2024-11-06 13:51:25.650896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.925 [2024-11-06 13:51:25.651121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.925 [2024-11-06 13:51:25.651242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:31.925 [2024-11-06 13:51:25.651347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.108 ms 00:25:31.925 [2024-11-06 13:51:25.651445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.925 [2024-11-06 13:51:25.725837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.925 [2024-11-06 13:51:25.726164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.925 [2024-11-06 13:51:25.726250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.163 ms 00:25:31.925 [2024-11-06 13:51:25.726289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.925 [2024-11-06 13:51:25.726405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.925 [2024-11-06 13:51:25.729746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.925 [2024-11-06 13:51:25.729784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:31.925 [2024-11-06 13:51:25.729796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.925 [2024-11-06 13:51:25.730726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.925 [2024-11-06 13:51:25.730756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.925 [2024-11-06 13:51:25.730769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:25:31.925 [2024-11-06 13:51:25.730780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.925 [2024-11-06 13:51:25.730933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.925 [2024-11-06 13:51:25.730949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.925 [2024-11-06 13:51:25.730961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:25:31.925 [2024-11-06 13:51:25.730980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.925 [2024-11-06 13:51:25.755896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.925 [2024-11-06 13:51:25.755944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.925 [2024-11-06 13:51:25.755965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.878 ms 00:25:31.925 [2024-11-06 13:51:25.755976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.925 [2024-11-06 13:51:25.777283] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:31.925 [2024-11-06 13:51:25.777332] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:31.925 [2024-11-06 13:51:25.777351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.925 [2024-11-06 13:51:25.777363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:31.925 [2024-11-06 13:51:25.777377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.177 ms 00:25:31.926 [2024-11-06 13:51:25.777388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.926 [2024-11-06 13:51:25.808450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.926 [2024-11-06 13:51:25.808512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:31.926 [2024-11-06 13:51:25.808529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.004 ms 00:25:31.926 [2024-11-06 13:51:25.808540] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.926 [2024-11-06 13:51:25.828204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.926 [2024-11-06 13:51:25.828268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:31.926 [2024-11-06 13:51:25.828283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.608 ms 00:25:31.926 [2024-11-06 13:51:25.828294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.926 [2024-11-06 13:51:25.846060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.926 [2024-11-06 13:51:25.846306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:31.926 [2024-11-06 13:51:25.846328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.720 ms 00:25:31.926 [2024-11-06 13:51:25.846339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.926 [2024-11-06 13:51:25.847235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.926 [2024-11-06 13:51:25.847262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:31.926 [2024-11-06 13:51:25.847277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:25:31.926 [2024-11-06 13:51:25.847287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.185 [2024-11-06 13:51:25.946679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.185 [2024-11-06 13:51:25.946770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:32.185 [2024-11-06 13:51:25.946790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.360 ms 00:25:32.185 [2024-11-06 13:51:25.946815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.185 [2024-11-06 13:51:25.958778] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:32.185 [2024-11-06 13:51:25.964342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.185 [2024-11-06 13:51:25.964373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:32.185 [2024-11-06 13:51:25.964389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.434 ms 00:25:32.185 [2024-11-06 13:51:25.964400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.185 [2024-11-06 13:51:25.964537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.186 [2024-11-06 13:51:25.964552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:32.186 [2024-11-06 13:51:25.964565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:32.186 [2024-11-06 13:51:25.964575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.186 [2024-11-06 13:51:25.964674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.186 [2024-11-06 13:51:25.964687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:32.186 [2024-11-06 13:51:25.964699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:32.186 [2024-11-06 13:51:25.964709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.186 [2024-11-06 13:51:25.964736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.186 [2024-11-06 13:51:25.964748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:25:32.186 [2024-11-06 13:51:25.964758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:32.186 [2024-11-06 13:51:25.964768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.186 [2024-11-06 13:51:25.964814] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:32.186 [2024-11-06 13:51:25.964827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.186 [2024-11-06 13:51:25.964845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:32.186 [2024-11-06 13:51:25.964855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:32.186 [2024-11-06 13:51:25.964866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.186 [2024-11-06 13:51:26.003769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.186 [2024-11-06 13:51:26.003926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:32.186 [2024-11-06 13:51:26.004045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.881 ms 00:25:32.186 [2024-11-06 13:51:26.004089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.186 [2024-11-06 13:51:26.004210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.186 [2024-11-06 13:51:26.004394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:32.186 [2024-11-06 13:51:26.004433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:32.186 [2024-11-06 13:51:26.004466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.186 [2024-11-06 13:51:26.006103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 493.678 ms, result 0 00:25:33.124  [2024-11-06T13:51:28.043Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-06T13:51:29.420Z] Copying: 61/1024 [MB] (31 MBps) [2024-11-06T13:51:30.367Z] Copying: 92/1024 [MB] (30 MBps) [2024-11-06T13:51:31.302Z] Copying: 123/1024 [MB] (30 MBps) [2024-11-06T13:51:32.233Z] Copying: 153/1024 [MB] (29 MBps) [2024-11-06T13:51:33.168Z] Copying: 184/1024 [MB] (31 MBps) [2024-11-06T13:51:34.101Z] Copying: 215/1024 [MB] (31 MBps) [2024-11-06T13:51:35.090Z] Copying: 247/1024 [MB] (31 MBps) [2024-11-06T13:51:36.024Z] Copying: 279/1024 [MB] (32 MBps) [2024-11-06T13:51:37.400Z] Copying: 309/1024 [MB] (29 MBps) [2024-11-06T13:51:38.336Z] Copying: 339/1024 [MB] (30 MBps) [2024-11-06T13:51:39.272Z] Copying: 369/1024 [MB] (30 MBps) [2024-11-06T13:51:40.209Z] Copying: 398/1024 [MB] (28 MBps) [2024-11-06T13:51:41.148Z] Copying: 429/1024 [MB] (31 MBps) [2024-11-06T13:51:42.084Z] Copying: 460/1024 [MB] (31 MBps) [2024-11-06T13:51:43.462Z] Copying: 490/1024 [MB] (29 MBps) [2024-11-06T13:51:44.030Z] Copying: 519/1024 [MB] (29 MBps) [2024-11-06T13:51:45.407Z] Copying: 549/1024 [MB] (29 MBps) [2024-11-06T13:51:46.343Z] Copying: 577/1024 [MB] (28 MBps) [2024-11-06T13:51:47.337Z] Copying: 605/1024 [MB] (27 MBps) [2024-11-06T13:51:48.274Z] Copying: 633/1024 [MB] (28 MBps) [2024-11-06T13:51:49.210Z] Copying: 663/1024 [MB] (29 MBps) [2024-11-06T13:51:50.147Z] Copying: 693/1024 [MB] (30 MBps) [2024-11-06T13:51:51.083Z] Copying: 723/1024 [MB] (30 MBps) [2024-11-06T13:51:52.019Z] Copying: 753/1024 [MB] (29 MBps) [2024-11-06T13:51:53.396Z] Copying: 781/1024 [MB] (28 MBps) [2024-11-06T13:51:54.334Z] Copying: 810/1024 [MB] (29 
MBps) [2024-11-06T13:51:55.270Z] Copying: 837/1024 [MB] (27 MBps) [2024-11-06T13:51:56.202Z] Copying: 865/1024 [MB] (27 MBps) [2024-11-06T13:51:57.144Z] Copying: 892/1024 [MB] (27 MBps) [2024-11-06T13:51:58.081Z] Copying: 922/1024 [MB] (29 MBps) [2024-11-06T13:51:59.458Z] Copying: 951/1024 [MB] (29 MBps) [2024-11-06T13:52:00.025Z] Copying: 980/1024 [MB] (29 MBps) [2024-11-06T13:52:00.593Z] Copying: 1010/1024 [MB] (29 MBps) [2024-11-06T13:52:00.593Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-06 13:52:00.474667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.610 [2024-11-06 13:52:00.474725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:06.610 [2024-11-06 13:52:00.474744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:06.610 [2024-11-06 13:52:00.474757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.610 [2024-11-06 13:52:00.474782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:06.610 [2024-11-06 13:52:00.479035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.610 [2024-11-06 13:52:00.479069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:06.610 [2024-11-06 13:52:00.479082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.224 ms 00:26:06.610 [2024-11-06 13:52:00.479106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.610 [2024-11-06 13:52:00.480721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.610 [2024-11-06 13:52:00.480759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:06.610 [2024-11-06 13:52:00.480772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:26:06.610 [2024-11-06 13:52:00.480782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.610 [2024-11-06 13:52:00.496182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.610 [2024-11-06 13:52:00.496220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:06.610 [2024-11-06 13:52:00.496233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.382 ms 00:26:06.610 [2024-11-06 13:52:00.496243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.610 [2024-11-06 13:52:00.501347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.611 [2024-11-06 13:52:00.501516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:06.611 [2024-11-06 13:52:00.501537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.065 ms 00:26:06.611 [2024-11-06 13:52:00.501548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.611 [2024-11-06 13:52:00.539214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.611 [2024-11-06 13:52:00.539252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:06.611 [2024-11-06 13:52:00.539265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.604 ms 00:26:06.611 [2024-11-06 13:52:00.539275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.611 [2024-11-06 13:52:00.561035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.611 [2024-11-06 13:52:00.561086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:26:06.611 [2024-11-06 13:52:00.561101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.722 ms 00:26:06.611 [2024-11-06 13:52:00.561112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.611 [2024-11-06 13:52:00.561229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.611 [2024-11-06 13:52:00.561243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:06.611 [2024-11-06 13:52:00.561268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:06.611 [2024-11-06 13:52:00.561278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.871 [2024-11-06 13:52:00.598443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.871 [2024-11-06 13:52:00.598480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:06.871 [2024-11-06 13:52:00.598494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.149 ms 00:26:06.871 [2024-11-06 13:52:00.598504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.871 [2024-11-06 13:52:00.634744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.871 [2024-11-06 13:52:00.634780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:06.871 [2024-11-06 13:52:00.634805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.202 ms 00:26:06.871 [2024-11-06 13:52:00.634815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.871 [2024-11-06 13:52:00.670670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.871 [2024-11-06 13:52:00.670705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:06.871 [2024-11-06 13:52:00.670718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.816 ms 00:26:06.871 [2024-11-06 13:52:00.670727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.871 [2024-11-06 13:52:00.707330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.871 [2024-11-06 13:52:00.707381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:06.871 [2024-11-06 13:52:00.707395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.529 ms 00:26:06.871 [2024-11-06 13:52:00.707405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.871 [2024-11-06 13:52:00.707441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:06.871 [2024-11-06 13:52:00.707458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:06.871 [2024-11-06 13:52:00.707471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:06.871 [2024-11-06 13:52:00.707483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707814] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.707993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 
13:52:00.708109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:26:06.872 [2024-11-06 13:52:00.708408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:06.872 [2024-11-06 13:52:00.708522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:06.873 [2024-11-06 13:52:00.708532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:06.873 [2024-11-06 13:52:00.708543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:06.873 [2024-11-06 13:52:00.708562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:06.873 [2024-11-06 13:52:00.708573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:06.873 [2024-11-06 13:52:00.708584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:06.873 [2024-11-06 13:52:00.708595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:06.873 [2024-11-06 13:52:00.708606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:06.873 [2024-11-06 13:52:00.708624] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:06.873 [2024-11-06 13:52:00.708640] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3613176e-faf5-4dd8-a731-104be21354e8 00:26:06.873 [2024-11-06 13:52:00.708654] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:06.873 [2024-11-06 13:52:00.708664] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:06.873 [2024-11-06 13:52:00.708674] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:06.873 [2024-11-06 13:52:00.708685] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:06.873 [2024-11-06 13:52:00.708694] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:26:06.873 [2024-11-06 13:52:00.708704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:06.873 [2024-11-06 13:52:00.708715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:06.873 [2024-11-06 13:52:00.708734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:06.873 [2024-11-06 13:52:00.708743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:06.873 [2024-11-06 13:52:00.708753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.873 [2024-11-06 13:52:00.708764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:06.873 [2024-11-06 13:52:00.708774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:26:06.873 [2024-11-06 13:52:00.708787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.873 [2024-11-06 13:52:00.729134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.873 [2024-11-06 13:52:00.729168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:06.873 [2024-11-06 13:52:00.729180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.311 ms 00:26:06.873 [2024-11-06 13:52:00.729190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.873 [2024-11-06 13:52:00.729741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.873 [2024-11-06 13:52:00.729756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:06.873 [2024-11-06 13:52:00.729768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:26:06.873 [2024-11-06 13:52:00.729778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.873 [2024-11-06 13:52:00.782749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.873 [2024-11-06 13:52:00.782785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:06.873 [2024-11-06 13:52:00.782798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.873 [2024-11-06 13:52:00.782809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.873 [2024-11-06 13:52:00.782870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.873 [2024-11-06 13:52:00.782881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:06.873 [2024-11-06 13:52:00.782892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.873 [2024-11-06 13:52:00.782902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.873 [2024-11-06 13:52:00.782970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.873 [2024-11-06 13:52:00.782984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:06.873 [2024-11-06 13:52:00.782995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.873 [2024-11-06 13:52:00.783004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.873 [2024-11-06 13:52:00.783037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.873 [2024-11-06 13:52:00.783049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:06.873 [2024-11-06 13:52:00.783059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.873 [2024-11-06 13:52:00.783069] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:00.911447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.133 [2024-11-06 13:52:00.911712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:07.133 [2024-11-06 13:52:00.911736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.133 [2024-11-06 13:52:00.911747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:01.015935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.133 [2024-11-06 13:52:01.015985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:07.133 [2024-11-06 13:52:01.016001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.133 [2024-11-06 13:52:01.016011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:01.016134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.133 [2024-11-06 13:52:01.016147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:07.133 [2024-11-06 13:52:01.016159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.133 [2024-11-06 13:52:01.016169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:01.016213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.133 [2024-11-06 13:52:01.016225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:07.133 [2024-11-06 13:52:01.016235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.133 [2024-11-06 13:52:01.016245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:01.016356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.133 [2024-11-06 13:52:01.016374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:07.133 [2024-11-06 13:52:01.016385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.133 [2024-11-06 13:52:01.016395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:01.016431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.133 [2024-11-06 13:52:01.016443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:07.133 [2024-11-06 13:52:01.016453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.133 [2024-11-06 13:52:01.016463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:01.016503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.133 [2024-11-06 13:52:01.016518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:07.133 [2024-11-06 13:52:01.016529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.133 [2024-11-06 13:52:01.016539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:01.016581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.133 [2024-11-06 13:52:01.016593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:07.133 [2024-11-06 13:52:01.016603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:26:07.133 [2024-11-06 13:52:01.016613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.133 [2024-11-06 13:52:01.016735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.029 ms, result 0 00:26:09.066 00:26:09.066 00:26:09.066 13:52:02 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:09.066 [2024-11-06 13:52:02.702625] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:26:09.066 [2024-11-06 13:52:02.702815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77429 ] 00:26:09.066 [2024-11-06 13:52:02.891121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.066 [2024-11-06 13:52:03.004934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.635 [2024-11-06 13:52:03.373404] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:09.635 [2024-11-06 13:52:03.373475] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:09.635 [2024-11-06 13:52:03.534265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.635 [2024-11-06 13:52:03.534318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:09.635 [2024-11-06 13:52:03.534340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:09.635 [2024-11-06 13:52:03.534350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.635 [2024-11-06 13:52:03.534421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.635 [2024-11-06 13:52:03.534435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:09.635 [2024-11-06 13:52:03.534449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:09.635 [2024-11-06 13:52:03.534459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.635 [2024-11-06 13:52:03.534481] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:09.635 [2024-11-06 13:52:03.535571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:09.635 [2024-11-06 13:52:03.535607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.635 [2024-11-06 13:52:03.535618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:09.635 [2024-11-06 13:52:03.535630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:26:09.635 [2024-11-06 13:52:03.535639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.635 [2024-11-06 13:52:03.537070] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:09.636 [2024-11-06 13:52:03.556641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.556679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:09.636 [2024-11-06 13:52:03.556693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 19.571 ms 00:26:09.636 [2024-11-06 13:52:03.556704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.556775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.556788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:09.636 [2024-11-06 13:52:03.556800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:09.636 [2024-11-06 13:52:03.556810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.563562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.563594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:09.636 [2024-11-06 13:52:03.563607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.679 ms 00:26:09.636 [2024-11-06 13:52:03.563621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.563701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.563716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:09.636 [2024-11-06 13:52:03.563727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:09.636 [2024-11-06 13:52:03.563737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.563780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.563792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:09.636 [2024-11-06 13:52:03.563802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:09.636 [2024-11-06 13:52:03.563812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.563842] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:09.636 [2024-11-06 13:52:03.568710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.568875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:09.636 [2024-11-06 13:52:03.568896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.878 ms 00:26:09.636 [2024-11-06 13:52:03.568913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.568948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.568960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:09.636 [2024-11-06 13:52:03.568970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:09.636 [2024-11-06 13:52:03.568980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.569059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:09.636 [2024-11-06 13:52:03.569084] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:09.636 [2024-11-06 13:52:03.569120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:09.636 [2024-11-06 13:52:03.569142] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:26:09.636 [2024-11-06 13:52:03.569233] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:09.636 [2024-11-06 13:52:03.569247] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:09.636 [2024-11-06 13:52:03.569260] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:09.636 [2024-11-06 13:52:03.569274] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569287] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:09.636 [2024-11-06 13:52:03.569308] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:09.636 [2024-11-06 13:52:03.569318] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:09.636 [2024-11-06 13:52:03.569331] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:09.636 [2024-11-06 13:52:03.569342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.569352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:09.636 [2024-11-06 13:52:03.569363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:26:09.636 [2024-11-06 13:52:03.569373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.569449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.636 [2024-11-06 13:52:03.569460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:09.636 [2024-11-06 13:52:03.569471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:09.636 [2024-11-06 13:52:03.569480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.636 [2024-11-06 13:52:03.569577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:09.636 [2024-11-06 13:52:03.569592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:09.636 [2024-11-06 13:52:03.569604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:09.636 [2024-11-06 13:52:03.569634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:09.636 [2024-11-06 13:52:03.569664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:09.636 [2024-11-06 13:52:03.569683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:09.636 [2024-11-06 13:52:03.569692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:09.636 [2024-11-06 13:52:03.569706] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:09.636 [2024-11-06 13:52:03.569716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:09.636 [2024-11-06 13:52:03.569726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:09.636 [2024-11-06 13:52:03.569744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:09.636 [2024-11-06 13:52:03.569764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:09.636 [2024-11-06 13:52:03.569793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:09.636 [2024-11-06 13:52:03.569821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:09.636 [2024-11-06 13:52:03.569849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:09.636 [2024-11-06 13:52:03.569876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.636 [2024-11-06 13:52:03.569895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:09.636 [2024-11-06 13:52:03.569904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:09.636 [2024-11-06 13:52:03.569923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:09.636 [2024-11-06 13:52:03.569932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:09.636 [2024-11-06 13:52:03.569941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:09.636 [2024-11-06 13:52:03.569950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:09.636 [2024-11-06 13:52:03.569960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:09.636 [2024-11-06 13:52:03.569969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.636 [2024-11-06 13:52:03.569978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:09.636 [2024-11-06 13:52:03.569987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:09.636 [2024-11-06 13:52:03.569995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.636 [2024-11-06 13:52:03.570005] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:09.636 [2024-11-06 
13:52:03.570029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:09.636 [2024-11-06 13:52:03.570040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:09.636 [2024-11-06 13:52:03.570050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.636 [2024-11-06 13:52:03.570060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:09.636 [2024-11-06 13:52:03.570070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:09.636 [2024-11-06 13:52:03.570080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:09.636 [2024-11-06 13:52:03.570089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:09.636 [2024-11-06 13:52:03.570098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:09.636 [2024-11-06 13:52:03.570108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:09.636 [2024-11-06 13:52:03.570118] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:09.636 [2024-11-06 13:52:03.570131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:09.637 [2024-11-06 13:52:03.570142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:09.637 [2024-11-06 13:52:03.570152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:09.637 [2024-11-06 13:52:03.570162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:09.637 [2024-11-06 13:52:03.570172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:09.637 [2024-11-06 13:52:03.570183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:09.637 [2024-11-06 13:52:03.570194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:09.637 [2024-11-06 13:52:03.570204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:09.637 [2024-11-06 13:52:03.570214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:09.637 [2024-11-06 13:52:03.570225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:09.637 [2024-11-06 13:52:03.570236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:09.637 [2024-11-06 13:52:03.570246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:09.637 [2024-11-06 13:52:03.570256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:09.637 [2024-11-06 13:52:03.570267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:26:09.637 [2024-11-06 13:52:03.570277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:09.637 [2024-11-06 13:52:03.570287] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:09.637 [2024-11-06 13:52:03.570302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:09.637 [2024-11-06 13:52:03.570313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:09.637 [2024-11-06 13:52:03.570324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:09.637 [2024-11-06 13:52:03.570334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:09.637 [2024-11-06 13:52:03.570344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:09.637 [2024-11-06 13:52:03.570363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.637 [2024-11-06 13:52:03.570377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:09.637 [2024-11-06 13:52:03.570387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:26:09.637 [2024-11-06 13:52:03.570397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.637 [2024-11-06 13:52:03.610327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.637 [2024-11-06 13:52:03.610518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:09.637 [2024-11-06 13:52:03.610541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.881 ms 00:26:09.637 [2024-11-06 13:52:03.610552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.637 [2024-11-06 13:52:03.610650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.637 [2024-11-06 13:52:03.610662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:09.637 [2024-11-06 13:52:03.610673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:09.637 [2024-11-06 13:52:03.610683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.896 [2024-11-06 13:52:03.672738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.896 [2024-11-06 13:52:03.672778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:09.896 [2024-11-06 13:52:03.672792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.986 ms 00:26:09.896 [2024-11-06 13:52:03.672803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.896 [2024-11-06 13:52:03.672852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.896 [2024-11-06 13:52:03.672863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:09.896 [2024-11-06 13:52:03.672878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:09.896 [2024-11-06 13:52:03.672888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.896 [2024-11-06 13:52:03.673386] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.896 [2024-11-06 13:52:03.673401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:09.896 [2024-11-06 13:52:03.673412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:26:09.896 [2024-11-06 13:52:03.673422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.896 [2024-11-06 13:52:03.673540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.896 [2024-11-06 13:52:03.673554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:09.896 [2024-11-06 13:52:03.673565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:09.896 [2024-11-06 13:52:03.673581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.896 [2024-11-06 13:52:03.693185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.896 [2024-11-06 13:52:03.693221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:09.896 [2024-11-06 13:52:03.693239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.584 ms 00:26:09.896 [2024-11-06 13:52:03.693249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.896 [2024-11-06 13:52:03.712142] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:09.896 [2024-11-06 13:52:03.712179] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:09.896 [2024-11-06 13:52:03.712194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.896 [2024-11-06 13:52:03.712206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:09.896 [2024-11-06 13:52:03.712217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.835 ms 00:26:09.896 [2024-11-06 13:52:03.712227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.896 [2024-11-06 13:52:03.742533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.896 [2024-11-06 13:52:03.742570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:09.897 [2024-11-06 13:52:03.742584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.263 ms 00:26:09.897 [2024-11-06 13:52:03.742595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.897 [2024-11-06 13:52:03.761430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.897 [2024-11-06 13:52:03.761467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:09.897 [2024-11-06 13:52:03.761480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.795 ms 00:26:09.897 [2024-11-06 13:52:03.761490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.897 [2024-11-06 13:52:03.780213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.897 [2024-11-06 13:52:03.780368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:09.897 [2024-11-06 13:52:03.780388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.683 ms 00:26:09.897 [2024-11-06 13:52:03.780399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.897 [2024-11-06 13:52:03.781213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.897 
[2024-11-06 13:52:03.781238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:09.897 [2024-11-06 13:52:03.781250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:26:09.897 [2024-11-06 13:52:03.781263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.897 [2024-11-06 13:52:03.868189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.897 [2024-11-06 13:52:03.868253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:09.897 [2024-11-06 13:52:03.868292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.902 ms 00:26:09.897 [2024-11-06 13:52:03.868303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.156 [2024-11-06 13:52:03.879232] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:10.156 [2024-11-06 13:52:03.882370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.156 [2024-11-06 13:52:03.882401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:10.156 [2024-11-06 13:52:03.882416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.009 ms 00:26:10.156 [2024-11-06 13:52:03.882426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.156 [2024-11-06 13:52:03.882529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.156 [2024-11-06 13:52:03.882543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:10.156 [2024-11-06 13:52:03.882554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:10.156 [2024-11-06 13:52:03.882568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.156 [2024-11-06 13:52:03.882658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.156 [2024-11-06 13:52:03.882672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:10.156 [2024-11-06 13:52:03.882682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:10.156 [2024-11-06 13:52:03.882692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.156 [2024-11-06 13:52:03.882717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.156 [2024-11-06 13:52:03.882729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:10.156 [2024-11-06 13:52:03.882739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:10.156 [2024-11-06 13:52:03.882749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.156 [2024-11-06 13:52:03.882784] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:10.156 [2024-11-06 13:52:03.882796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.156 [2024-11-06 13:52:03.882806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:10.156 [2024-11-06 13:52:03.882817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:10.156 [2024-11-06 13:52:03.882827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.156 [2024-11-06 13:52:03.920391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.156 [2024-11-06 13:52:03.920429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:10.156 
[2024-11-06 13:52:03.920443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.541 ms 00:26:10.156 [2024-11-06 13:52:03.920459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.156 [2024-11-06 13:52:03.920531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.156 [2024-11-06 13:52:03.920544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:10.156 [2024-11-06 13:52:03.920556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:10.157 [2024-11-06 13:52:03.920566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.157 [2024-11-06 13:52:03.921652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.928 ms, result 0 00:26:11.535  [2024-11-06T13:52:06.454Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-06T13:52:07.390Z] Copying: 62/1024 [MB] (31 MBps) [2024-11-06T13:52:08.326Z] Copying: 94/1024 [MB] (32 MBps) [2024-11-06T13:52:09.263Z] Copying: 126/1024 [MB] (31 MBps) [2024-11-06T13:52:10.197Z] Copying: 157/1024 [MB] (31 MBps) [2024-11-06T13:52:11.575Z] Copying: 188/1024 [MB] (31 MBps) [2024-11-06T13:52:12.512Z] Copying: 219/1024 [MB] (30 MBps) [2024-11-06T13:52:13.448Z] Copying: 250/1024 [MB] (31 MBps) [2024-11-06T13:52:14.383Z] Copying: 281/1024 [MB] (31 MBps) [2024-11-06T13:52:15.330Z] Copying: 312/1024 [MB] (30 MBps) [2024-11-06T13:52:16.289Z] Copying: 342/1024 [MB] (29 MBps) [2024-11-06T13:52:17.226Z] Copying: 372/1024 [MB] (30 MBps) [2024-11-06T13:52:18.163Z] Copying: 404/1024 [MB] (32 MBps) [2024-11-06T13:52:19.539Z] Copying: 436/1024 [MB] (31 MBps) [2024-11-06T13:52:20.480Z] Copying: 466/1024 [MB] (29 MBps) [2024-11-06T13:52:21.417Z] Copying: 498/1024 [MB] (31 MBps) [2024-11-06T13:52:22.353Z] Copying: 529/1024 [MB] (31 MBps) [2024-11-06T13:52:23.291Z] Copying: 560/1024 [MB] (31 MBps) [2024-11-06T13:52:24.226Z] Copying: 591/1024 [MB] (30 MBps) [2024-11-06T13:52:25.162Z] Copying: 622/1024 [MB] (30 MBps) [2024-11-06T13:52:26.539Z] Copying: 652/1024 [MB] (30 MBps) [2024-11-06T13:52:27.476Z] Copying: 683/1024 [MB] (30 MBps) [2024-11-06T13:52:28.412Z] Copying: 712/1024 [MB] (28 MBps) [2024-11-06T13:52:29.348Z] Copying: 741/1024 [MB] (29 MBps) [2024-11-06T13:52:30.284Z] Copying: 771/1024 [MB] (30 MBps) [2024-11-06T13:52:31.220Z] Copying: 801/1024 [MB] (29 MBps) [2024-11-06T13:52:32.154Z] Copying: 830/1024 [MB] (29 MBps) [2024-11-06T13:52:33.532Z] Copying: 860/1024 [MB] (29 MBps) [2024-11-06T13:52:34.468Z] Copying: 890/1024 [MB] (30 MBps) [2024-11-06T13:52:35.404Z] Copying: 921/1024 [MB] (30 MBps) [2024-11-06T13:52:36.339Z] Copying: 952/1024 [MB] (30 MBps) [2024-11-06T13:52:37.272Z] Copying: 982/1024 [MB] (30 MBps) [2024-11-06T13:52:37.531Z] Copying: 1013/1024 [MB] (30 MBps) [2024-11-06T13:52:38.466Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-06 13:52:38.427561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.483 [2024-11-06 13:52:38.427682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:44.483 [2024-11-06 13:52:38.427721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:44.483 [2024-11-06 13:52:38.427750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.483 [2024-11-06 13:52:38.427806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:44.483 [2024-11-06 13:52:38.433615] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.483 [2024-11-06 13:52:38.433654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:44.483 [2024-11-06 13:52:38.433717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.769 ms 00:26:44.483 [2024-11-06 13:52:38.433727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.483 [2024-11-06 13:52:38.433934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.483 [2024-11-06 13:52:38.433947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:44.483 [2024-11-06 13:52:38.433959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:26:44.483 [2024-11-06 13:52:38.433969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.483 [2024-11-06 13:52:38.436679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.483 [2024-11-06 13:52:38.436701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:44.483 [2024-11-06 13:52:38.436711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:26:44.483 [2024-11-06 13:52:38.436736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.483 [2024-11-06 13:52:38.442510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.483 [2024-11-06 13:52:38.442542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:44.483 [2024-11-06 13:52:38.442554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.745 ms 00:26:44.483 [2024-11-06 13:52:38.442564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.742 [2024-11-06 13:52:38.480184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.742 [2024-11-06 13:52:38.480222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:44.742 [2024-11-06 13:52:38.480236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.543 ms 00:26:44.742 [2024-11-06 13:52:38.480262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.742 [2024-11-06 13:52:38.501028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.742 [2024-11-06 13:52:38.501067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:44.742 [2024-11-06 13:52:38.501082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.726 ms 00:26:44.742 [2024-11-06 13:52:38.501092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.742 [2024-11-06 13:52:38.501231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.742 [2024-11-06 13:52:38.501256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:44.742 [2024-11-06 13:52:38.501267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:44.742 [2024-11-06 13:52:38.501277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.742 [2024-11-06 13:52:38.536949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.742 [2024-11-06 13:52:38.537144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:44.742 [2024-11-06 13:52:38.537168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.656 ms 00:26:44.742 [2024-11-06 13:52:38.537179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
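A note on mining these records: the trace_step messages in this log always arrive as an Action/Rollback, name, duration, status quadruple, so per-step timings can be folded back out of the console output. Below is a minimal Python sketch of that, assuming one record per line as in the raw console stream; the ftl.log path is a placeholder, not a file this job produces.

import re
import sys
from collections import defaultdict

# Match the "name: <step>" and "duration: <ms> ms" records that
# mngt/ftl_mngt.c's trace_step() emits, as seen throughout this log.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+)$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([0-9.]+) ms")

def step_durations(lines):
    """Yield (device, step, duration_ms) for each name/duration pair."""
    pending = {}  # device -> most recent step name still awaiting a duration
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending[m.group(1)] = m.group(2).strip()
            continue
        m = DUR_RE.search(line)
        if m and m.group(1) in pending:
            yield m.group(1), pending.pop(m.group(1)), float(m.group(2))

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "ftl.log"  # placeholder path
    totals = defaultdict(float)
    with open(path) as f:
        for dev, step, ms in step_durations(f):
            totals[(dev, step)] += ms
    for (dev, step), ms in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{dev}  {step:<35} {ms:9.3f} ms")

Summing per (device, step) also makes the zero-duration rollback entries that appear later in this shutdown sequence easy to separate from the real work items.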
00:26:44.742 [2024-11-06 13:52:38.572792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.742 [2024-11-06 13:52:38.572842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:44.742 [2024-11-06 13:52:38.572855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.575 ms 00:26:44.742 [2024-11-06 13:52:38.572879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.742 [2024-11-06 13:52:38.609489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.742 [2024-11-06 13:52:38.609527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:44.742 [2024-11-06 13:52:38.609541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.572 ms 00:26:44.742 [2024-11-06 13:52:38.609550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.742 [2024-11-06 13:52:38.645654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.742 [2024-11-06 13:52:38.645690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:44.742 [2024-11-06 13:52:38.645703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.021 ms 00:26:44.742 [2024-11-06 13:52:38.645713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.742 [2024-11-06 13:52:38.645750] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:44.742 [2024-11-06 13:52:38.645768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.645997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646221] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:44.742 [2024-11-06 13:52:38.646254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 
13:52:38.646498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:26:44.743 [2024-11-06 13:52:38.646761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:44.743 [2024-11-06 13:52:38.646885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:44.743 [2024-11-06 13:52:38.646901] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3613176e-faf5-4dd8-a731-104be21354e8 00:26:44.743 [2024-11-06 13:52:38.646912] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:44.743 [2024-11-06 13:52:38.646922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:44.743 [2024-11-06 13:52:38.646932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:44.743 [2024-11-06 13:52:38.646942] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:44.743 [2024-11-06 13:52:38.646952] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:44.743 [2024-11-06 13:52:38.646962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:44.743 [2024-11-06 13:52:38.646985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:44.743 [2024-11-06 13:52:38.646995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:44.743 [2024-11-06 13:52:38.647004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:44.743 [2024-11-06 13:52:38.647014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.743 [2024-11-06 13:52:38.647034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:44.743 [2024-11-06 13:52:38.647045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:26:44.743 [2024-11-06 13:52:38.647054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.743 [2024-11-06 13:52:38.667451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.743 [2024-11-06 13:52:38.667485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:44.743 [2024-11-06 13:52:38.667498] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.338 ms 00:26:44.743 [2024-11-06 13:52:38.667508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.743 [2024-11-06 13:52:38.668095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.743 [2024-11-06 13:52:38.668115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:44.743 [2024-11-06 13:52:38.668126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:26:44.743 [2024-11-06 13:52:38.668146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.743 [2024-11-06 13:52:38.720283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.743 [2024-11-06 13:52:38.720448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.743 [2024-11-06 13:52:38.720469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.743 [2024-11-06 13:52:38.720482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.743 [2024-11-06 13:52:38.720537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.743 [2024-11-06 13:52:38.720549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.743 [2024-11-06 13:52:38.720560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.743 [2024-11-06 13:52:38.720575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.743 [2024-11-06 13:52:38.720642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.743 [2024-11-06 13:52:38.720657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.743 [2024-11-06 13:52:38.720667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.743 [2024-11-06 13:52:38.720677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.743 [2024-11-06 13:52:38.720695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.743 [2024-11-06 13:52:38.720706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.743 [2024-11-06 13:52:38.720716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.743 [2024-11-06 13:52:38.720726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.845501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.002 [2024-11-06 13:52:38.845755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.002 [2024-11-06 13:52:38.845778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.002 [2024-11-06 13:52:38.845790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.946613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.002 [2024-11-06 13:52:38.946681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:45.002 [2024-11-06 13:52:38.946695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.002 [2024-11-06 13:52:38.946712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.946804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.002 [2024-11-06 13:52:38.946817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:26:45.002 [2024-11-06 13:52:38.946827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.002 [2024-11-06 13:52:38.946837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.946884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.002 [2024-11-06 13:52:38.946896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:45.002 [2024-11-06 13:52:38.946906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.002 [2024-11-06 13:52:38.946916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.947022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.002 [2024-11-06 13:52:38.947056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:45.002 [2024-11-06 13:52:38.947067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.002 [2024-11-06 13:52:38.947076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.947113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.002 [2024-11-06 13:52:38.947125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:45.002 [2024-11-06 13:52:38.947135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.002 [2024-11-06 13:52:38.947145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.947187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.002 [2024-11-06 13:52:38.947198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:45.002 [2024-11-06 13:52:38.947208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.002 [2024-11-06 13:52:38.947217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.947257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.002 [2024-11-06 13:52:38.947269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:45.002 [2024-11-06 13:52:38.947279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.002 [2024-11-06 13:52:38.947288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.002 [2024-11-06 13:52:38.947400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.834 ms, result 0 00:26:46.377 00:26:46.377 00:26:46.377 13:52:40 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:48.279 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:48.279 13:52:41 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:48.279 [2024-11-06 13:52:41.879302] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:26:48.279 [2024-11-06 13:52:41.879658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77829 ] 00:26:48.279 [2024-11-06 13:52:42.048683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.279 [2024-11-06 13:52:42.162072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.847 [2024-11-06 13:52:42.528540] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:48.847 [2024-11-06 13:52:42.528603] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:48.847 [2024-11-06 13:52:42.690699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.847 [2024-11-06 13:52:42.690752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:48.847 [2024-11-06 13:52:42.690774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:48.847 [2024-11-06 13:52:42.690785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.847 [2024-11-06 13:52:42.690832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.847 [2024-11-06 13:52:42.690844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:48.847 [2024-11-06 13:52:42.690858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:48.847 [2024-11-06 13:52:42.690868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.847 [2024-11-06 13:52:42.690889] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:48.847 [2024-11-06 13:52:42.691935] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:48.847 [2024-11-06 13:52:42.691971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.847 [2024-11-06 13:52:42.691982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:48.847 [2024-11-06 13:52:42.691992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:26:48.847 [2024-11-06 13:52:42.692002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.847 [2024-11-06 13:52:42.693453] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:48.847 [2024-11-06 13:52:42.712979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.847 [2024-11-06 13:52:42.713049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:48.847 [2024-11-06 13:52:42.713064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.526 ms 00:26:48.847 [2024-11-06 13:52:42.713074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.847 [2024-11-06 13:52:42.713140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.847 [2024-11-06 13:52:42.713153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:48.847 [2024-11-06 13:52:42.713164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:48.847 [2024-11-06 13:52:42.713174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.847 [2024-11-06 13:52:42.719899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:48.847 [2024-11-06 13:52:42.720146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:48.847 [2024-11-06 13:52:42.720182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.654 ms
00:26:48.847 [2024-11-06 13:52:42.720206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.847 [2024-11-06 13:52:42.720311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.848 [2024-11-06 13:52:42.720334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:26:48.848 [2024-11-06 13:52:42.720346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:26:48.848 [2024-11-06 13:52:42.720356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.848 [2024-11-06 13:52:42.720403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.848 [2024-11-06 13:52:42.720415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:26:48.848 [2024-11-06 13:52:42.720427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:26:48.848 [2024-11-06 13:52:42.720436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.848 [2024-11-06 13:52:42.720466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:48.848 [2024-11-06 13:52:42.726022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.848 [2024-11-06 13:52:42.726073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:26:48.848 [2024-11-06 13:52:42.726089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.564 ms
00:26:48.848 [2024-11-06 13:52:42.726106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.848 [2024-11-06 13:52:42.726141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.848 [2024-11-06 13:52:42.726155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:26:48.848 [2024-11-06 13:52:42.726169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:26:48.848 [2024-11-06 13:52:42.726181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.848 [2024-11-06 13:52:42.726242] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:26:48.848 [2024-11-06 13:52:42.726268] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:26:48.848 [2024-11-06 13:52:42.726326] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:26:48.848 [2024-11-06 13:52:42.726350] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:26:48.848 [2024-11-06 13:52:42.726455] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:26:48.848 [2024-11-06 13:52:42.726472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:26:48.848 [2024-11-06 13:52:42.726488] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:26:48.848 [2024-11-06 13:52:42.726504] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:26:48.848 [2024-11-06 13:52:42.726519] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:26:48.848 [2024-11-06 13:52:42.726533] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:26:48.848 [2024-11-06 13:52:42.726546] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:26:48.848 [2024-11-06 13:52:42.726558] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:26:48.848 [2024-11-06 13:52:42.726575] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:26:48.848 [2024-11-06 13:52:42.726588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.848 [2024-11-06 13:52:42.726601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:26:48.848 [2024-11-06 13:52:42.726615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms
00:26:48.848 [2024-11-06 13:52:42.726628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.848 [2024-11-06 13:52:42.726709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.848 [2024-11-06 13:52:42.726722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:26:48.848 [2024-11-06 13:52:42.726736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:26:48.848 [2024-11-06 13:52:42.726748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.848 [2024-11-06 13:52:42.726851] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:26:48.848 [2024-11-06 13:52:42.726869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:26:48.848 [2024-11-06 13:52:42.726882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:26:48.848 [2024-11-06 13:52:42.726896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:48.848 [2024-11-06 13:52:42.726909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:26:48.848 [2024-11-06 13:52:42.726921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:26:48.848 [2024-11-06 13:52:42.726933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:26:48.848 [2024-11-06 13:52:42.726946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:26:48.848 [2024-11-06 13:52:42.726958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:26:48.848 [2024-11-06 13:52:42.726970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:26:48.848 [2024-11-06 13:52:42.727001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:26:48.848 [2024-11-06 13:52:42.727015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:26:48.848 [2024-11-06 13:52:42.727048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:26:48.848 [2024-11-06 13:52:42.727061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:26:48.848 [2024-11-06 13:52:42.727071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:26:48.848 [2024-11-06 13:52:42.727091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:26:48.848 [2024-11-06 13:52:42.727112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:26:48.848 [2024-11-06 13:52:42.727122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:26:48.848 [2024-11-06 13:52:42.727143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:48.848 [2024-11-06 13:52:42.727163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:26:48.848 [2024-11-06 13:52:42.727173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:48.848 [2024-11-06 13:52:42.727192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:26:48.848 [2024-11-06 13:52:42.727202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:48.848 [2024-11-06 13:52:42.727222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:26:48.848 [2024-11-06 13:52:42.727231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:48.848 [2024-11-06 13:52:42.727251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:26:48.848 [2024-11-06 13:52:42.727262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:26:48.848 [2024-11-06 13:52:42.727281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:26:48.848 [2024-11-06 13:52:42.727291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:26:48.848 [2024-11-06 13:52:42.727301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:26:48.848 [2024-11-06 13:52:42.727311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:26:48.848 [2024-11-06 13:52:42.727321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:26:48.848 [2024-11-06 13:52:42.727331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:26:48.848 [2024-11-06 13:52:42.727350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:26:48.848 [2024-11-06 13:52:42.727361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727371] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:26:48.848 [2024-11-06 13:52:42.727381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:26:48.848 [2024-11-06 13:52:42.727392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:26:48.848 [2024-11-06 13:52:42.727402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:48.848 [2024-11-06 13:52:42.727413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:26:48.848 [2024-11-06 13:52:42.727423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:26:48.848 [2024-11-06 13:52:42.727433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:26:48.848 [2024-11-06 13:52:42.727444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:26:48.848 [2024-11-06 13:52:42.727453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:26:48.848 [2024-11-06 13:52:42.727463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:26:48.848 [2024-11-06 13:52:42.727475] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:26:48.848 [2024-11-06 13:52:42.727488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:48.848 [2024-11-06 13:52:42.727501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:26:48.848 [2024-11-06 13:52:42.727512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:26:48.848 [2024-11-06 13:52:42.727523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:26:48.848 [2024-11-06 13:52:42.727535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:26:48.848 [2024-11-06 13:52:42.727546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:26:48.848 [2024-11-06 13:52:42.727557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:26:48.848 [2024-11-06 13:52:42.727568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:26:48.848 [2024-11-06 13:52:42.727579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:26:48.848 [2024-11-06 13:52:42.727590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:26:48.848 [2024-11-06 13:52:42.727601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:26:48.849 [2024-11-06 13:52:42.727612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:26:48.849 [2024-11-06 13:52:42.727623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:26:48.849 [2024-11-06 13:52:42.727635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:26:48.849 [2024-11-06 13:52:42.727646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:26:48.849 [2024-11-06 13:52:42.727657] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:26:48.849 [2024-11-06 13:52:42.727672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:48.849 [2024-11-06 13:52:42.727684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:26:48.849 [2024-11-06 13:52:42.727696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:26:48.849 [2024-11-06 13:52:42.727707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:26:48.849 [2024-11-06 13:52:42.727719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:26:48.849 [2024-11-06 13:52:42.727742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.849 [2024-11-06 13:52:42.727752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:26:48.849 [2024-11-06 13:52:42.727763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms
00:26:48.849 [2024-11-06 13:52:42.727772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.849 [2024-11-06 13:52:42.769890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.849 [2024-11-06 13:52:42.769937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:26:48.849 [2024-11-06 13:52:42.769953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.068 ms
00:26:48.849 [2024-11-06 13:52:42.769964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.849 [2024-11-06 13:52:42.770083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.849 [2024-11-06 13:52:42.770096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:26:48.849 [2024-11-06 13:52:42.770107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:26:48.849 [2024-11-06 13:52:42.770117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.849 [2024-11-06 13:52:42.827846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.849 [2024-11-06 13:52:42.827908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:26:48.849 [2024-11-06 13:52:42.827924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.651 ms
00:26:48.849 [2024-11-06 13:52:42.827935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.849 [2024-11-06 13:52:42.827991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:48.849 [2024-11-06 13:52:42.828002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:26:48.849 [2024-11-06 13:52:42.828033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:26:48.849 [2024-11-06 13:52:42.828045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.108 [2024-11-06 13:52:42.828537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.108 [2024-11-06 13:52:42.828558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:26:49.108 [2024-11-06 13:52:42.828570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms
00:26:49.108 [2024-11-06 13:52:42.828580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.108 [2024-11-06 13:52:42.828699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.108 [2024-11-06 13:52:42.828713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:26:49.109 [2024-11-06 13:52:42.828723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:26:49.109 [2024-11-06 13:52:42.828740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:42.849216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:42.849257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:26:49.109 [2024-11-06 13:52:42.849275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.454 ms
00:26:49.109 [2024-11-06 13:52:42.849287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:42.868719] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:26:49.109 [2024-11-06 13:52:42.868866] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:26:49.109 [2024-11-06 13:52:42.868888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:42.868899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:26:49.109 [2024-11-06 13:52:42.868926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.479 ms
00:26:49.109 [2024-11-06 13:52:42.868935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:42.899147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:42.899303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:26:49.109 [2024-11-06 13:52:42.899326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.127 ms
00:26:49.109 [2024-11-06 13:52:42.899337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:42.918458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:42.918506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:26:49.109 [2024-11-06 13:52:42.918520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.039 ms
00:26:49.109 [2024-11-06 13:52:42.918530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:42.937150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:42.937301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:26:49.109 [2024-11-06 13:52:42.937321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.579 ms
00:26:49.109 [2024-11-06 13:52:42.937331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:42.938232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:42.938257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:26:49.109 [2024-11-06 13:52:42.938270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms
00:26:49.109 [2024-11-06 13:52:42.938283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.026692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:43.026759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:26:49.109 [2024-11-06 13:52:43.026781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.385 ms
00:26:49.109 [2024-11-06 13:52:43.026792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.038098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:26:49.109 [2024-11-06 13:52:43.041050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:43.041082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:26:49.109 [2024-11-06 13:52:43.041096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.195 ms
00:26:49.109 [2024-11-06 13:52:43.041107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.041202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:43.041216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:26:49.109 [2024-11-06 13:52:43.041227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:26:49.109 [2024-11-06 13:52:43.041240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.041327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:43.041340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:26:49.109 [2024-11-06 13:52:43.041351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:26:49.109 [2024-11-06 13:52:43.041361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.041386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:43.041398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:26:49.109 [2024-11-06 13:52:43.041408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:49.109 [2024-11-06 13:52:43.041418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.041452] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:26:49.109 [2024-11-06 13:52:43.041464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:43.041475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:26:49.109 [2024-11-06 13:52:43.041485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:26:49.109 [2024-11-06 13:52:43.041495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.079078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:43.079235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:26:49.109 [2024-11-06 13:52:43.079372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.558 ms
00:26:49.109 [2024-11-06 13:52:43.079419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.079514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:49.109 [2024-11-06 13:52:43.079656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:26:49.109 [2024-11-06 13:52:43.079730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:26:49.109 [2024-11-06 13:52:43.079762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:49.109 [2024-11-06 13:52:43.080970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.768 ms, result 0
00:26:50.484  [2024-11-06T13:52:45.403Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-06T13:52:46.339Z] Copying: 60/1024 [MB] (29 MBps) [2024-11-06T13:52:47.274Z] Copying: 89/1024 [MB] (28 MBps) [2024-11-06T13:52:48.209Z] Copying: 118/1024 [MB] (29 MBps) [2024-11-06T13:52:49.145Z] Copying: 148/1024 [MB] (29 MBps) [2024-11-06T13:52:50.116Z] Copying: 178/1024 [MB] (29 MBps) [2024-11-06T13:52:51.493Z] Copying: 208/1024 [MB] (30 MBps) [2024-11-06T13:52:52.427Z] Copying: 238/1024 [MB] (30 MBps) [2024-11-06T13:52:53.363Z] Copying: 268/1024 [MB] (29 MBps) [2024-11-06T13:52:54.301Z] Copying: 297/1024 [MB] (29 MBps) [2024-11-06T13:52:55.236Z] Copying: 329/1024 [MB] (31 MBps) [2024-11-06T13:52:56.170Z] Copying: 359/1024 [MB] (30 MBps) [2024-11-06T13:52:57.106Z] Copying: 392/1024 [MB] (32 MBps) [2024-11-06T13:52:58.482Z] Copying: 424/1024 [MB] (32 MBps) [2024-11-06T13:52:59.418Z] Copying: 455/1024 [MB] (30 MBps) [2024-11-06T13:53:00.350Z] Copying: 486/1024 [MB] (31 MBps) [2024-11-06T13:53:01.283Z] Copying: 518/1024 [MB] (32 MBps) [2024-11-06T13:53:02.218Z] Copying: 549/1024 [MB] (30 MBps) [2024-11-06T13:53:03.154Z] Copying: 581/1024 [MB] (31 MBps) [2024-11-06T13:53:04.531Z] Copying: 613/1024 [MB] (32 MBps) [2024-11-06T13:53:05.099Z] Copying: 643/1024 [MB] (29 MBps) [2024-11-06T13:53:06.476Z] Copying: 672/1024 [MB] (28 MBps) [2024-11-06T13:53:07.412Z] Copying: 700/1024 [MB] (27 MBps) [2024-11-06T13:53:08.347Z] Copying: 728/1024 [MB] (28 MBps) [2024-11-06T13:53:09.287Z] Copying: 756/1024 [MB] (28 MBps) [2024-11-06T13:53:10.222Z] Copying: 784/1024 [MB] (27 MBps) [2024-11-06T13:53:11.159Z] Copying: 812/1024 [MB] (27 MBps) [2024-11-06T13:53:12.146Z] Copying: 840/1024 [MB] (27 MBps) [2024-11-06T13:53:13.122Z] Copying: 868/1024 [MB] (27 MBps) [2024-11-06T13:53:14.499Z] Copying: 896/1024 [MB] (28 MBps) [2024-11-06T13:53:15.434Z] Copying: 926/1024 [MB] (30 MBps) [2024-11-06T13:53:16.369Z] Copying: 956/1024 [MB] (30 MBps) [2024-11-06T13:53:17.305Z] Copying: 988/1024 [MB] (31 MBps) [2024-11-06T13:53:18.242Z] Copying: 1020/1024 [MB] (32 MBps) [2024-11-06T13:53:18.242Z] Copying: 1048556/1048576 [kB] (3980 kBps) [2024-11-06T13:53:18.242Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-06 13:53:18.128127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.259 [2024-11-06 13:53:18.128192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:24.259 [2024-11-06 13:53:18.128209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:27:24.259 [2024-11-06 13:53:18.128230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.259 [2024-11-06 13:53:18.130831] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:24.259 [2024-11-06 13:53:18.136933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.259 [2024-11-06 13:53:18.136967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:24.259 [2024-11-06 13:53:18.136980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.055 ms
00:27:24.259 [2024-11-06 13:53:18.137006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.259 [2024-11-06 13:53:18.147631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.259 [2024-11-06 13:53:18.147672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:24.259 [2024-11-06 13:53:18.147687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.945 ms
00:27:24.259 [2024-11-06 13:53:18.147708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.259 [2024-11-06 13:53:18.168227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.259 [2024-11-06 13:53:18.168264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:27:24.259 [2024-11-06 13:53:18.168280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.500 ms
00:27:24.259 [2024-11-06 13:53:18.168293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.259 [2024-11-06 13:53:18.173767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.259 [2024-11-06 13:53:18.173800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:27:24.259 [2024-11-06 13:53:18.173813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.437 ms
00:27:24.259 [2024-11-06 13:53:18.173823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.259 [2024-11-06 13:53:18.211847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.259 [2024-11-06 13:53:18.212028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:27:24.259 [2024-11-06 13:53:18.212051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.959 ms
00:27:24.259 [2024-11-06 13:53:18.212061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.259 [2024-11-06 13:53:18.233641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.259 [2024-11-06 13:53:18.233686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:27:24.259 [2024-11-06 13:53:18.233701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.511 ms
00:27:24.259 [2024-11-06 13:53:18.233712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.519 [2024-11-06 13:53:18.340448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.519 [2024-11-06 13:53:18.340627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:27:24.519 [2024-11-06 13:53:18.340654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.689 ms
00:27:24.519 [2024-11-06 13:53:18.340668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.519 [2024-11-06 13:53:18.379020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.519 [2024-11-06 13:53:18.379071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:27:24.519 [2024-11-06 13:53:18.379086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.324 ms
00:27:24.519 [2024-11-06 13:53:18.379096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.519 [2024-11-06 13:53:18.416490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.519 [2024-11-06 13:53:18.416660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:27:24.519 [2024-11-06 13:53:18.416682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.352 ms
00:27:24.519 [2024-11-06 13:53:18.416692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.519 [2024-11-06 13:53:18.452970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.519 [2024-11-06 13:53:18.453015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:24.519 [2024-11-06 13:53:18.453045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.238 ms
00:27:24.519 [2024-11-06 13:53:18.453056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.519 [2024-11-06 13:53:18.489676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.519 [2024-11-06 13:53:18.489720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:24.519 [2024-11-06 13:53:18.489737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.539 ms
00:27:24.519 [2024-11-06 13:53:18.489750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.519 [2024-11-06 13:53:18.489794] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:24.520 [2024-11-06 13:53:18.489814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117248 / 261120 wr_cnt: 1 state: open
00:27:24.520 [2024-11-06 13:53:18.489830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.489997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.490990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.491006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.491021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.491045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:27:24.520 [2024-11-06 13:53:18.491061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:24.521 [2024-11-06 13:53:18.491389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:24.521 [2024-11-06 13:53:18.491403] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3613176e-faf5-4dd8-a731-104be21354e8
00:27:24.521 [2024-11-06 13:53:18.491418] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117248
00:27:24.521 [2024-11-06 13:53:18.491433] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118208
00:27:24.521 [2024-11-06 13:53:18.491446] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117248
00:27:24.521 [2024-11-06 13:53:18.491461] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082
00:27:24.521 [2024-11-06 13:53:18.491475] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:24.521 [2024-11-06 13:53:18.491495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:24.521 [2024-11-06 13:53:18.491533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:24.521 [2024-11-06 13:53:18.491547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:24.521 [2024-11-06 13:53:18.491558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:24.521 [2024-11-06 13:53:18.491569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.521 [2024-11-06 13:53:18.491580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:24.521 [2024-11-06 13:53:18.491593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms
00:27:24.521 [2024-11-06 13:53:18.491607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.780 [2024-11-06 13:53:18.511190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.780 [2024-11-06 13:53:18.511359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:24.780 [2024-11-06 13:53:18.511382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.539 ms
00:27:24.780 [2024-11-06 13:53:18.511402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.780 [2024-11-06 13:53:18.511967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:24.780 [2024-11-06 13:53:18.511984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:24.780 [2024-11-06 13:53:18.511995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms
00:27:24.780 [2024-11-06 13:53:18.512005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.780 [2024-11-06 13:53:18.564718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:24.780 [2024-11-06 13:53:18.564769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:24.780 [2024-11-06 13:53:18.564784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:24.780 [2024-11-06 13:53:18.564795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.780 [2024-11-06 13:53:18.564862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:24.780 [2024-11-06 13:53:18.564873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:24.780 [2024-11-06 13:53:18.564883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:24.780 [2024-11-06 13:53:18.564893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.780 [2024-11-06 13:53:18.564965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:24.780 [2024-11-06 13:53:18.564978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:24.780 [2024-11-06 13:53:18.564993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:24.780 [2024-11-06 13:53:18.565003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.781 [2024-11-06 13:53:18.565037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:24.781 [2024-11-06 13:53:18.565049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:24.781 [2024-11-06 13:53:18.565060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:24.781 [2024-11-06 13:53:18.565070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:24.781 [2024-11-06 13:53:18.692983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:24.781 [2024-11-06 13:53:18.693061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:24.781 [2024-11-06 13:53:18.693085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:24.781 [2024-11-06 13:53:18.693096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.040 [2024-11-06 13:53:18.796909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:25.040 [2024-11-06 13:53:18.797121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:25.040 [2024-11-06 13:53:18.797146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:25.040 [2024-11-06 13:53:18.797158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.040 [2024-11-06 13:53:18.797259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:25.040 [2024-11-06 13:53:18.797271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:25.040 [2024-11-06 13:53:18.797282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:25.040 [2024-11-06 13:53:18.797296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.040 [2024-11-06 13:53:18.797346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:25.040 [2024-11-06 13:53:18.797357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:25.040 [2024-11-06 13:53:18.797368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:25.040 [2024-11-06 13:53:18.797378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.040 [2024-11-06 13:53:18.797484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:25.040 [2024-11-06 13:53:18.797497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:25.040 [2024-11-06 13:53:18.797508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:25.040 [2024-11-06 13:53:18.797518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.040 [2024-11-06 13:53:18.797557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:25.040 [2024-11-06 13:53:18.797569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:25.040 [2024-11-06 13:53:18.797579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:25.040 [2024-11-06 13:53:18.797590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.040 [2024-11-06 13:53:18.797626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:25.040 [2024-11-06 13:53:18.797637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:25.040 [2024-11-06 13:53:18.797647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:25.040 [2024-11-06 13:53:18.797657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.040 [2024-11-06 13:53:18.797702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:25.040 [2024-11-06 13:53:18.797714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:25.040 [2024-11-06 13:53:18.797725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:25.040 [2024-11-06 13:53:18.797734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.040 [2024-11-06 13:53:18.797849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 671.477 ms, result 0
00:27:26.417
00:27:26.417
00:27:26.417 13:53:20 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:27:26.676 [2024-11-06 13:53:20.405649] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization...
00:27:26.676 [2024-11-06 13:53:20.405827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78209 ]
00:27:26.676 [2024-11-06 13:53:20.592328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:26.934 [2024-11-06 13:53:20.768397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:27.193 [2024-11-06 13:53:21.144463] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:27.193 [2024-11-06 13:53:21.144537] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:27.452 [2024-11-06 13:53:21.305630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.452 [2024-11-06 13:53:21.305835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:27:27.452 [2024-11-06 13:53:21.305942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:27:27.452 [2024-11-06 13:53:21.305980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.452 [2024-11-06 13:53:21.306072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.452 [2024-11-06 13:53:21.306112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:27.452 [2024-11-06 13:53:21.306211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:27:27.452 [2024-11-06 13:53:21.306247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.452 [2024-11-06 13:53:21.306282] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:27.452 [2024-11-06 13:53:21.307385] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:27.452 [2024-11-06 13:53:21.307423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.452 [2024-11-06 13:53:21.307435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:27.452 [2024-11-06 13:53:21.307448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms
00:27:27.452 [2024-11-06 13:53:21.307458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.452 [2024-11-06 13:53:21.308917] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:27:27.452 [2024-11-06 13:53:21.328137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.452 [2024-11-06 13:53:21.328185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:27:27.452 [2024-11-06 13:53:21.328215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.221 ms
00:27:27.452 [2024-11-06 13:53:21.328225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.328292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.453 [2024-11-06 13:53:21.328305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:27:27.453 [2024-11-06 13:53:21.328317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:27:27.453 [2024-11-06 13:53:21.328327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.335000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.453 [2024-11-06 13:53:21.335035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:27.453 [2024-11-06 13:53:21.335047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.587 ms
00:27:27.453 [2024-11-06 13:53:21.335061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.335141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.453 [2024-11-06 13:53:21.335155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:27.453 [2024-11-06 13:53:21.335165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:27:27.453 [2024-11-06 13:53:21.335176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.335218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.453 [2024-11-06 13:53:21.335230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:27:27.453 [2024-11-06 13:53:21.335240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:27:27.453 [2024-11-06 13:53:21.335250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.335280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:27.453 [2024-11-06 13:53:21.340203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.453 [2024-11-06 13:53:21.340331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:27.453 [2024-11-06 13:53:21.340472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.934 ms
00:27:27.453 [2024-11-06 13:53:21.340516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.340573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.453 [2024-11-06 13:53:21.340605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:27:27.453 [2024-11-06 13:53:21.340637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:27:27.453 [2024-11-06 13:53:21.340721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.340808] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:27:27.453 [2024-11-06 13:53:21.340868] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:27:27.453 [2024-11-06 13:53:21.340999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:27:27.453 [2024-11-06 13:53:21.341088] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:27:27.453 [2024-11-06 13:53:21.341221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:27:27.453 [2024-11-06 13:53:21.341294] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:27:27.453 [2024-11-06 13:53:21.341410] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:27:27.453 [2024-11-06 13:53:21.341467] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:27:27.453 [2024-11-06 13:53:21.341518] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:27:27.453 [2024-11-06 13:53:21.341615] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:27:27.453 [2024-11-06 13:53:21.341651] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:27:27.453 [2024-11-06 13:53:21.341683] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:27:27.453 [2024-11-06 13:53:21.341718] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:27:27.453 [2024-11-06 13:53:21.341813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.453 [2024-11-06 13:53:21.341828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:27:27.453 [2024-11-06 13:53:21.341840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms
00:27:27.453 [2024-11-06 13:53:21.341851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.341935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:27.453 [2024-11-06 13:53:21.341947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:27:27.453 [2024-11-06 13:53:21.341957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:27:27.453 [2024-11-06 13:53:21.341968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:27.453 [2024-11-06 13:53:21.342080] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:27:27.453 [2024-11-06 13:53:21.342096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:27:27.453 [2024-11-06 13:53:21.342107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:27:27.453 [2024-11-06 13:53:21.342118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:27:27.453 [2024-11-06 13:53:21.342138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:27:27.453 [2024-11-06 13:53:21.342157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:27:27.453 [2024-11-06 13:53:21.342166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:27:27.453 [2024-11-06 13:53:21.342185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:27:27.453 [2024-11-06 13:53:21.342194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:27:27.453 [2024-11-06 13:53:21.342203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:27:27.453 [2024-11-06 13:53:21.342212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:27:27.453 [2024-11-06 13:53:21.342222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:27:27.453 [2024-11-06 13:53:21.342241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:27:27.453 [2024-11-06 13:53:21.342260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:27:27.453 [2024-11-06 13:53:21.342269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:27:27.453 [2024-11-06 13:53:21.342287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:27.453 [2024-11-06 13:53:21.342306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:27:27.453 [2024-11-06 13:53:21.342316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:27.453 [2024-11-06 13:53:21.342334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:27:27.453 [2024-11-06 13:53:21.342343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:27.453 [2024-11-06 13:53:21.342361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:27:27.453 [2024-11-06 13:53:21.342370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:27.453 [2024-11-06 13:53:21.342398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:27:27.453 [2024-11-06 13:53:21.342407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:27:27.453 [2024-11-06 13:53:21.342426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:27:27.453 [2024-11-06 13:53:21.342436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:27:27.453 [2024-11-06 13:53:21.342445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:27:27.453 [2024-11-06 13:53:21.342454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:27:27.453 [2024-11-06 13:53:21.342463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:27:27.453 [2024-11-06 13:53:21.342472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:27:27.453 [2024-11-06 13:53:21.342490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:27:27.453 [2024-11-06 13:53:21.342499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342508] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:27:27.453 [2024-11-06 13:53:21.342518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:27:27.453 [2024-11-06 13:53:21.342528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:27:27.453 [2024-11-06 13:53:21.342538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:27.453 [2024-11-06 13:53:21.342548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:27:27.453 [2024-11-06 13:53:21.342557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:27:27.453 [2024-11-06 13:53:21.342566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
[2024-11-06 13:53:21.342575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:27.454 [2024-11-06 13:53:21.342584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:27.454 [2024-11-06 13:53:21.342593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:27.454 [2024-11-06 13:53:21.342604] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:27.454 [2024-11-06 13:53:21.342617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.454 [2024-11-06 13:53:21.342629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:27.454 [2024-11-06 13:53:21.342639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:27.454 [2024-11-06 13:53:21.342650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:27.454 [2024-11-06 13:53:21.342661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:27.454 [2024-11-06 13:53:21.342671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:27.454 [2024-11-06 13:53:21.342682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:27.454 [2024-11-06 13:53:21.342693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:27.454 [2024-11-06 13:53:21.342704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:27.454 [2024-11-06 13:53:21.342715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:27.454 [2024-11-06 13:53:21.342725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:27.454 [2024-11-06 13:53:21.342735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:27.454 [2024-11-06 13:53:21.342745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:27.454 [2024-11-06 13:53:21.342756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:27.454 [2024-11-06 13:53:21.342766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:27.454 [2024-11-06 13:53:21.342776] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:27.454 [2024-11-06 13:53:21.342790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.454 [2024-11-06 13:53:21.342802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:27.454 [2024-11-06 13:53:21.342812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:27.454 [2024-11-06 13:53:21.342822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:27.454 [2024-11-06 13:53:21.342832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:27.454 [2024-11-06 13:53:21.342843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.454 [2024-11-06 13:53:21.342853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:27.454 [2024-11-06 13:53:21.342863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:27:27.454 [2024-11-06 13:53:21.342874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.454 [2024-11-06 13:53:21.384960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.454 [2024-11-06 13:53:21.385177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:27.454 [2024-11-06 13:53:21.385267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.036 ms 00:27:27.454 [2024-11-06 13:53:21.385308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.454 [2024-11-06 13:53:21.385452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.454 [2024-11-06 13:53:21.385532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:27.454 [2024-11-06 13:53:21.385569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:27.454 [2024-11-06 13:53:21.385600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.713 [2024-11-06 13:53:21.444408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.713 [2024-11-06 13:53:21.444587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:27.713 [2024-11-06 13:53:21.444727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.672 ms 00:27:27.713 [2024-11-06 13:53:21.444767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.713 [2024-11-06 13:53:21.444841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.713 [2024-11-06 13:53:21.444875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:27.713 [2024-11-06 13:53:21.444913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:27.713 [2024-11-06 13:53:21.444997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.713 [2024-11-06 13:53:21.445546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.445661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:27.714 [2024-11-06 13:53:21.445741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:27:27.714 [2024-11-06 13:53:21.445778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.445927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.445970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:27.714 [2024-11-06 13:53:21.446056] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:27:27.714 [2024-11-06 13:53:21.446101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.465978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.466125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:27.714 [2024-11-06 13:53:21.466241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.827 ms 00:27:27.714 [2024-11-06 13:53:21.466279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.486368] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:27.714 [2024-11-06 13:53:21.486615] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:27.714 [2024-11-06 13:53:21.486810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.486848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:27.714 [2024-11-06 13:53:21.486886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.384 ms 00:27:27.714 [2024-11-06 13:53:21.486919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.518379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.518569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:27.714 [2024-11-06 13:53:21.518667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.286 ms 00:27:27.714 [2024-11-06 13:53:21.518683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.537651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.537703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:27.714 [2024-11-06 13:53:21.537718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.874 ms 00:27:27.714 [2024-11-06 13:53:21.537738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.557135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.557199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:27.714 [2024-11-06 13:53:21.557215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.337 ms 00:27:27.714 [2024-11-06 13:53:21.557225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.558158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.558184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:27.714 [2024-11-06 13:53:21.558196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:27:27.714 [2024-11-06 13:53:21.558211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.648978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.649055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:27.714 [2024-11-06 13:53:21.649078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.740 ms 00:27:27.714 [2024-11-06 13:53:21.649089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.660306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:27.714 [2024-11-06 13:53:21.663367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.663506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:27.714 [2024-11-06 13:53:21.663531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.208 ms 00:27:27.714 [2024-11-06 13:53:21.663541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.663647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.663661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:27.714 [2024-11-06 13:53:21.663673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:27.714 [2024-11-06 13:53:21.663686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.665267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.665304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:27.714 [2024-11-06 13:53:21.665317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.518 ms 00:27:27.714 [2024-11-06 13:53:21.665327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.665366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.665378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:27.714 [2024-11-06 13:53:21.665389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:27.714 [2024-11-06 13:53:21.665398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.714 [2024-11-06 13:53:21.665438] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:27.714 [2024-11-06 13:53:21.665451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.714 [2024-11-06 13:53:21.665461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:27.714 [2024-11-06 13:53:21.665472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:27.714 [2024-11-06 13:53:21.665482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.973 [2024-11-06 13:53:21.705767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.973 [2024-11-06 13:53:21.705813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:27.973 [2024-11-06 13:53:21.705828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.264 ms 00:27:27.973 [2024-11-06 13:53:21.705844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.973 [2024-11-06 13:53:21.705922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.973 [2024-11-06 13:53:21.705935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:27.973 [2024-11-06 13:53:21.705946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:27.973 [2024-11-06 13:53:21.705957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
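Each management step above is reported by trace_step() in mngt/ftl_mngt.c as a fixed Action / name / duration / status quadruplet, so per-step startup cost can be tabulated straight from the console output. A minimal shell sketch (illustrative only, not part of the test suite; it assumes one *NOTICE* record per line, as in the raw log, and a hypothetical capture file ftl.log):

  # pair every "name:" record with the "duration:" record that follows it
  grep 'trace_step' ftl.log \
    | sed -n -e 's/.*name: //p' -e 's/.*duration: //p' \
    | paste - -

For the sequence above this yields lines such as "Load super block<TAB>19.221 ms", and the per-step durations sum to roughly the 401 ms total that finish_msg reports for 'FTL startup' just below.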
00:27:27.973 [2024-11-06 13:53:21.707130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.006 ms, result 0 00:27:29.351  [2024-11-06T13:53:24.269Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-06T13:53:25.206Z] Copying: 58/1024 [MB] (30 MBps) [2024-11-06T13:53:26.143Z] Copying: 88/1024 [MB] (29 MBps) [2024-11-06T13:53:27.079Z] Copying: 118/1024 [MB] (30 MBps) [2024-11-06T13:53:28.015Z] Copying: 150/1024 [MB] (31 MBps) [2024-11-06T13:53:28.949Z] Copying: 181/1024 [MB] (31 MBps) [2024-11-06T13:53:30.321Z] Copying: 212/1024 [MB] (30 MBps) [2024-11-06T13:53:31.255Z] Copying: 244/1024 [MB] (32 MBps) [2024-11-06T13:53:32.187Z] Copying: 278/1024 [MB] (33 MBps) [2024-11-06T13:53:33.121Z] Copying: 308/1024 [MB] (30 MBps) [2024-11-06T13:53:34.053Z] Copying: 337/1024 [MB] (29 MBps) [2024-11-06T13:53:34.986Z] Copying: 366/1024 [MB] (28 MBps) [2024-11-06T13:53:36.354Z] Copying: 394/1024 [MB] (28 MBps) [2024-11-06T13:53:37.286Z] Copying: 422/1024 [MB] (28 MBps) [2024-11-06T13:53:38.219Z] Copying: 451/1024 [MB] (28 MBps) [2024-11-06T13:53:39.155Z] Copying: 479/1024 [MB] (28 MBps) [2024-11-06T13:53:40.090Z] Copying: 508/1024 [MB] (29 MBps) [2024-11-06T13:53:41.026Z] Copying: 537/1024 [MB] (29 MBps) [2024-11-06T13:53:41.960Z] Copying: 568/1024 [MB] (30 MBps) [2024-11-06T13:53:43.334Z] Copying: 599/1024 [MB] (30 MBps) [2024-11-06T13:53:44.269Z] Copying: 630/1024 [MB] (30 MBps) [2024-11-06T13:53:45.205Z] Copying: 659/1024 [MB] (29 MBps) [2024-11-06T13:53:46.141Z] Copying: 688/1024 [MB] (28 MBps) [2024-11-06T13:53:47.077Z] Copying: 717/1024 [MB] (28 MBps) [2024-11-06T13:53:48.012Z] Copying: 746/1024 [MB] (28 MBps) [2024-11-06T13:53:48.969Z] Copying: 774/1024 [MB] (28 MBps) [2024-11-06T13:53:50.356Z] Copying: 803/1024 [MB] (29 MBps) [2024-11-06T13:53:51.293Z] Copying: 832/1024 [MB] (28 MBps) [2024-11-06T13:53:52.229Z] Copying: 860/1024 [MB] (28 MBps) [2024-11-06T13:53:53.165Z] Copying: 888/1024 [MB] (28 MBps) [2024-11-06T13:53:54.102Z] Copying: 919/1024 [MB] (31 MBps) [2024-11-06T13:53:55.038Z] Copying: 950/1024 [MB] (30 MBps) [2024-11-06T13:53:55.975Z] Copying: 981/1024 [MB] (30 MBps) [2024-11-06T13:53:56.542Z] Copying: 1012/1024 [MB] (31 MBps) [2024-11-06T13:53:56.800Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-06 13:53:56.743601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.817 [2024-11-06 13:53:56.743689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:02.817 [2024-11-06 13:53:56.743708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:02.817 [2024-11-06 13:53:56.743728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.817 [2024-11-06 13:53:56.743761] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:02.817 [2024-11-06 13:53:56.749164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.817 [2024-11-06 13:53:56.749212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:02.817 [2024-11-06 13:53:56.749228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.379 ms 00:28:02.817 [2024-11-06 13:53:56.749241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.817 [2024-11-06 13:53:56.749487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.817 [2024-11-06 13:53:56.749502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop 
core poller 00:28:02.817 [2024-11-06 13:53:56.749515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:28:02.817 [2024-11-06 13:53:56.749527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.817 [2024-11-06 13:53:56.754448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.817 [2024-11-06 13:53:56.754494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:02.817 [2024-11-06 13:53:56.754511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.893 ms 00:28:02.817 [2024-11-06 13:53:56.754523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.817 [2024-11-06 13:53:56.761263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.817 [2024-11-06 13:53:56.761305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:02.817 [2024-11-06 13:53:56.761320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.699 ms 00:28:02.817 [2024-11-06 13:53:56.761332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.076 [2024-11-06 13:53:56.804879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.076 [2024-11-06 13:53:56.804930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:03.076 [2024-11-06 13:53:56.804946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.481 ms 00:28:03.076 [2024-11-06 13:53:56.804957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.076 [2024-11-06 13:53:56.827044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.076 [2024-11-06 13:53:56.827098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:03.076 [2024-11-06 13:53:56.827114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.036 ms 00:28:03.076 [2024-11-06 13:53:56.827124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.076 [2024-11-06 13:53:56.937130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.076 [2024-11-06 13:53:56.937216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:03.076 [2024-11-06 13:53:56.937234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.951 ms 00:28:03.076 [2024-11-06 13:53:56.937246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.076 [2024-11-06 13:53:56.975800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.076 [2024-11-06 13:53:56.977850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:03.076 [2024-11-06 13:53:56.977890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.530 ms 00:28:03.076 [2024-11-06 13:53:56.977909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.076 [2024-11-06 13:53:57.015939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.076 [2024-11-06 13:53:57.015983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:03.076 [2024-11-06 13:53:57.016012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.970 ms 00:28:03.076 [2024-11-06 13:53:57.016036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.076 [2024-11-06 13:53:57.052676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.076 [2024-11-06 13:53:57.052723] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:03.076 [2024-11-06 13:53:57.052738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.596 ms 00:28:03.076 [2024-11-06 13:53:57.052749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.337 [2024-11-06 13:53:57.089510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.337 [2024-11-06 13:53:57.089579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:03.337 [2024-11-06 13:53:57.089594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.673 ms 00:28:03.337 [2024-11-06 13:53:57.089606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.337 [2024-11-06 13:53:57.089648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:03.337 [2024-11-06 13:53:57.089666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:03.337 [2024-11-06 13:53:57.089679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 
/ 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.089994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:03.337 [2024-11-06 13:53:57.090223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090418] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 
13:53:57.090700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:03.338 [2024-11-06 13:53:57.090800] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:03.338 [2024-11-06 13:53:57.090811] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3613176e-faf5-4dd8-a731-104be21354e8 00:28:03.338 [2024-11-06 13:53:57.090828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:03.338 [2024-11-06 13:53:57.090846] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14784 00:28:03.338 [2024-11-06 13:53:57.090861] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13824 00:28:03.338 [2024-11-06 13:53:57.090880] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0694 00:28:03.338 [2024-11-06 13:53:57.090898] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:03.338 [2024-11-06 13:53:57.090924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:03.338 [2024-11-06 13:53:57.090941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:03.338 [2024-11-06 13:53:57.090972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:03.338 [2024-11-06 13:53:57.090989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:03.338 [2024-11-06 13:53:57.091003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.338 [2024-11-06 13:53:57.091016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:03.338 [2024-11-06 13:53:57.091041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:28:03.338 [2024-11-06 13:53:57.091054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.338 [2024-11-06 13:53:57.109834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.338 [2024-11-06 13:53:57.109881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:03.338 [2024-11-06 13:53:57.109898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.723 ms 00:28:03.338 [2024-11-06 13:53:57.109918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.339 [2024-11-06 13:53:57.110421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.339 [2024-11-06 13:53:57.110445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:03.339 [2024-11-06 13:53:57.110459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:28:03.339 [2024-11-06 13:53:57.110473] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.339 [2024-11-06 13:53:57.163429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.339 [2024-11-06 13:53:57.163490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:03.339 [2024-11-06 13:53:57.163505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.339 [2024-11-06 13:53:57.163516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.339 [2024-11-06 13:53:57.163595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.339 [2024-11-06 13:53:57.163606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:03.339 [2024-11-06 13:53:57.163617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.339 [2024-11-06 13:53:57.163627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.339 [2024-11-06 13:53:57.163706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.339 [2024-11-06 13:53:57.163720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:03.339 [2024-11-06 13:53:57.163734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.339 [2024-11-06 13:53:57.163745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.339 [2024-11-06 13:53:57.163762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.339 [2024-11-06 13:53:57.163772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:03.339 [2024-11-06 13:53:57.163782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.339 [2024-11-06 13:53:57.163792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.339 [2024-11-06 13:53:57.292090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.339 [2024-11-06 13:53:57.292384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:03.339 [2024-11-06 13:53:57.292424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.339 [2024-11-06 13:53:57.292437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.598 [2024-11-06 13:53:57.396614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.598 [2024-11-06 13:53:57.396676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.598 [2024-11-06 13:53:57.396691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.598 [2024-11-06 13:53:57.396702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.598 [2024-11-06 13:53:57.396794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.598 [2024-11-06 13:53:57.396806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.598 [2024-11-06 13:53:57.396817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.598 [2024-11-06 13:53:57.396833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.598 [2024-11-06 13:53:57.396881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.598 [2024-11-06 13:53:57.396893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.598 [2024-11-06 13:53:57.396903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:03.598 [2024-11-06 13:53:57.396913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.598 [2024-11-06 13:53:57.397050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.598 [2024-11-06 13:53:57.397065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.598 [2024-11-06 13:53:57.397075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.598 [2024-11-06 13:53:57.397085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.598 [2024-11-06 13:53:57.397129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.598 [2024-11-06 13:53:57.397141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:03.598 [2024-11-06 13:53:57.397151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.598 [2024-11-06 13:53:57.397162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.598 [2024-11-06 13:53:57.397199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.598 [2024-11-06 13:53:57.397210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.598 [2024-11-06 13:53:57.397221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.598 [2024-11-06 13:53:57.397231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.598 [2024-11-06 13:53:57.397276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.598 [2024-11-06 13:53:57.397289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.598 [2024-11-06 13:53:57.397300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.598 [2024-11-06 13:53:57.397309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.598 [2024-11-06 13:53:57.397427] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 653.900 ms, result 0 00:28:04.535 00:28:04.535 00:28:04.535 13:53:58 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:06.436 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:06.436 13:54:00 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:06.436 13:54:00 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:06.436 13:54:00 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:06.695 Process with pid 76784 is not found 00:28:06.695 Remove shared memory files 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76784 00:28:06.695 13:54:00 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76784 ']' 00:28:06.695 13:54:00 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76784 00:28:06.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76784) - No such process 00:28:06.695 13:54:00 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 76784 is not found' 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:06.695 13:54:00 
ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:06.695 13:54:00 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:06.695 ************************************ 00:28:06.695 END TEST ftl_restore 00:28:06.695 ************************************ 00:28:06.695 00:28:06.695 real 2m58.038s 00:28:06.695 user 2m43.491s 00:28:06.695 sys 0m15.775s 00:28:06.695 13:54:00 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:06.695 13:54:00 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:06.695 13:54:00 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:06.695 13:54:00 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:06.695 13:54:00 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:06.695 13:54:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:06.695 ************************************ 00:28:06.695 START TEST ftl_dirty_shutdown 00:28:06.695 ************************************ 00:28:06.695 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:06.955 * Looking for test storage... 00:28:06.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.955 13:54:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.956 --rc genhtml_branch_coverage=1 00:28:06.956 --rc genhtml_function_coverage=1 00:28:06.956 --rc genhtml_legend=1 00:28:06.956 --rc geninfo_all_blocks=1 00:28:06.956 --rc geninfo_unexecuted_blocks=1 00:28:06.956 00:28:06.956 ' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.956 --rc genhtml_branch_coverage=1 00:28:06.956 --rc genhtml_function_coverage=1 00:28:06.956 --rc genhtml_legend=1 00:28:06.956 --rc geninfo_all_blocks=1 00:28:06.956 --rc geninfo_unexecuted_blocks=1 00:28:06.956 00:28:06.956 ' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.956 --rc genhtml_branch_coverage=1 00:28:06.956 --rc genhtml_function_coverage=1 00:28:06.956 --rc genhtml_legend=1 00:28:06.956 --rc geninfo_all_blocks=1 00:28:06.956 --rc geninfo_unexecuted_blocks=1 00:28:06.956 00:28:06.956 ' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.956 --rc genhtml_branch_coverage=1 00:28:06.956 --rc genhtml_function_coverage=1 00:28:06.956 --rc genhtml_legend=1 00:28:06.956 --rc geninfo_all_blocks=1 00:28:06.956 --rc geninfo_unexecuted_blocks=1 00:28:06.956 00:28:06.956 ' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:06.956 13:54:00 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78675 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78675 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78675 ']' 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:06.956 13:54:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:07.215 [2024-11-06 13:54:00.980662] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
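The trace above packs the whole bring-up into a few xtrace lines: ftl.sh hands control to run_test, dirty_shutdown.sh consumes its -c option as the NV-cache PCI address (0000:00:10.0) and the remaining positional argument as the base device (0000:00:11.0), registers the restore_kill trap at @42 so a failed run still tears the target down, then launches spdk_tgt on core mask 0x1 and blocks until the RPC socket answers. A minimal sketch of that pattern, with the polling loop simplified relative to the real waitforlisten helper in autotest_common.sh:

    # Parse options the way dirty_shutdown.sh's getopts loop does.
    while getopts ':u:c:' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0 in this run
            u) uuid=$OPTARG ;;
        esac
    done
    shift $((OPTIND - 1))
    device=$1                        # 0000:00:11.0 in this run

    # Launch the target pinned to core 0 and remember its pid for cleanup.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!

    # Block until the target answers on the default RPC socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done

Only after that wait returns (the "return 0" from waitforlisten at 13:54:02 above) does the script start building the bdev stack.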
00:28:07.215 [2024-11-06 13:54:00.981048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78675 ] 00:28:07.215 [2024-11-06 13:54:01.184532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.474 [2024-11-06 13:54:01.363428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.411 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:08.411 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:08.411 13:54:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:08.411 13:54:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:08.411 13:54:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:08.411 13:54:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:08.411 13:54:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:08.411 13:54:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:08.979 13:54:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:08.979 13:54:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:08.979 13:54:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:08.979 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:28:08.979 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:08.979 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:08.979 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:28:08.979 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:09.238 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:09.238 { 00:28:09.238 "name": "nvme0n1", 00:28:09.238 "aliases": [ 00:28:09.238 "15b58c42-8857-4b15-a566-736470c34f83" 00:28:09.238 ], 00:28:09.238 "product_name": "NVMe disk", 00:28:09.238 "block_size": 4096, 00:28:09.238 "num_blocks": 1310720, 00:28:09.238 "uuid": "15b58c42-8857-4b15-a566-736470c34f83", 00:28:09.238 "numa_id": -1, 00:28:09.238 "assigned_rate_limits": { 00:28:09.238 "rw_ios_per_sec": 0, 00:28:09.238 "rw_mbytes_per_sec": 0, 00:28:09.238 "r_mbytes_per_sec": 0, 00:28:09.238 "w_mbytes_per_sec": 0 00:28:09.238 }, 00:28:09.238 "claimed": true, 00:28:09.238 "claim_type": "read_many_write_one", 00:28:09.238 "zoned": false, 00:28:09.238 "supported_io_types": { 00:28:09.238 "read": true, 00:28:09.238 "write": true, 00:28:09.238 "unmap": true, 00:28:09.238 "flush": true, 00:28:09.238 "reset": true, 00:28:09.238 "nvme_admin": true, 00:28:09.239 "nvme_io": true, 00:28:09.239 "nvme_io_md": false, 00:28:09.239 "write_zeroes": true, 00:28:09.239 "zcopy": false, 00:28:09.239 "get_zone_info": false, 00:28:09.239 "zone_management": false, 00:28:09.239 "zone_append": false, 00:28:09.239 "compare": true, 00:28:09.239 "compare_and_write": false, 00:28:09.239 "abort": true, 00:28:09.239 "seek_hole": false, 00:28:09.239 "seek_data": false, 00:28:09.239 
"copy": true, 00:28:09.239 "nvme_iov_md": false 00:28:09.239 }, 00:28:09.239 "driver_specific": { 00:28:09.239 "nvme": [ 00:28:09.239 { 00:28:09.239 "pci_address": "0000:00:11.0", 00:28:09.239 "trid": { 00:28:09.239 "trtype": "PCIe", 00:28:09.239 "traddr": "0000:00:11.0" 00:28:09.239 }, 00:28:09.239 "ctrlr_data": { 00:28:09.239 "cntlid": 0, 00:28:09.239 "vendor_id": "0x1b36", 00:28:09.239 "model_number": "QEMU NVMe Ctrl", 00:28:09.239 "serial_number": "12341", 00:28:09.239 "firmware_revision": "8.0.0", 00:28:09.239 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:09.239 "oacs": { 00:28:09.239 "security": 0, 00:28:09.239 "format": 1, 00:28:09.239 "firmware": 0, 00:28:09.239 "ns_manage": 1 00:28:09.239 }, 00:28:09.239 "multi_ctrlr": false, 00:28:09.239 "ana_reporting": false 00:28:09.239 }, 00:28:09.239 "vs": { 00:28:09.239 "nvme_version": "1.4" 00:28:09.239 }, 00:28:09.239 "ns_data": { 00:28:09.239 "id": 1, 00:28:09.239 "can_share": false 00:28:09.239 } 00:28:09.239 } 00:28:09.239 ], 00:28:09.239 "mp_policy": "active_passive" 00:28:09.239 } 00:28:09.239 } 00:28:09.239 ]' 00:28:09.239 13:54:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:09.239 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:09.497 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=3331a87c-b63d-484b-8205-ddbae3293295 00:28:09.498 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:09.498 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3331a87c-b63d-484b-8205-ddbae3293295 00:28:09.772 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:10.066 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=1f3ce244-c2d0-4f04-a624-0c930b9518b4 00:28:10.066 13:54:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1f3ce244-c2d0-4f04-a624-0c930b9518b4 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:10.325 { 00:28:10.325 "name": "c7c29503-f025-4255-8e93-29ce9b1c695a", 00:28:10.325 "aliases": [ 00:28:10.325 "lvs/nvme0n1p0" 00:28:10.325 ], 00:28:10.325 "product_name": "Logical Volume", 00:28:10.325 "block_size": 4096, 00:28:10.325 "num_blocks": 26476544, 00:28:10.325 "uuid": "c7c29503-f025-4255-8e93-29ce9b1c695a", 00:28:10.325 "assigned_rate_limits": { 00:28:10.325 "rw_ios_per_sec": 0, 00:28:10.325 "rw_mbytes_per_sec": 0, 00:28:10.325 "r_mbytes_per_sec": 0, 00:28:10.325 "w_mbytes_per_sec": 0 00:28:10.325 }, 00:28:10.325 "claimed": false, 00:28:10.325 "zoned": false, 00:28:10.325 "supported_io_types": { 00:28:10.325 "read": true, 00:28:10.325 "write": true, 00:28:10.325 "unmap": true, 00:28:10.325 "flush": false, 00:28:10.325 "reset": true, 00:28:10.325 "nvme_admin": false, 00:28:10.325 "nvme_io": false, 00:28:10.325 "nvme_io_md": false, 00:28:10.325 "write_zeroes": true, 00:28:10.325 "zcopy": false, 00:28:10.325 "get_zone_info": false, 00:28:10.325 "zone_management": false, 00:28:10.325 "zone_append": false, 00:28:10.325 "compare": false, 00:28:10.325 "compare_and_write": false, 00:28:10.325 "abort": false, 00:28:10.325 "seek_hole": true, 00:28:10.325 "seek_data": true, 00:28:10.325 "copy": false, 00:28:10.325 "nvme_iov_md": false 00:28:10.325 }, 00:28:10.325 "driver_specific": { 00:28:10.325 "lvol": { 00:28:10.325 "lvol_store_uuid": "1f3ce244-c2d0-4f04-a624-0c930b9518b4", 00:28:10.325 "base_bdev": "nvme0n1", 00:28:10.325 "thin_provision": true, 00:28:10.325 "num_allocated_clusters": 0, 00:28:10.325 "snapshot": false, 00:28:10.325 "clone": false, 00:28:10.325 "esnap_clone": false 00:28:10.325 } 00:28:10.325 } 00:28:10.325 } 00:28:10.325 ]' 00:28:10.325 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:10.585 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:10.585 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:10.585 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:28:10.585 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:28:10.585 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:28:10.585 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:10.585 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:10.585 13:54:04 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:10.843 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:10.843 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:10.843 13:54:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:10.843 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:10.843 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:10.843 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:10.843 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:28:10.843 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:11.102 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:11.102 { 00:28:11.102 "name": "c7c29503-f025-4255-8e93-29ce9b1c695a", 00:28:11.102 "aliases": [ 00:28:11.102 "lvs/nvme0n1p0" 00:28:11.102 ], 00:28:11.102 "product_name": "Logical Volume", 00:28:11.102 "block_size": 4096, 00:28:11.102 "num_blocks": 26476544, 00:28:11.102 "uuid": "c7c29503-f025-4255-8e93-29ce9b1c695a", 00:28:11.102 "assigned_rate_limits": { 00:28:11.102 "rw_ios_per_sec": 0, 00:28:11.102 "rw_mbytes_per_sec": 0, 00:28:11.102 "r_mbytes_per_sec": 0, 00:28:11.102 "w_mbytes_per_sec": 0 00:28:11.102 }, 00:28:11.102 "claimed": false, 00:28:11.102 "zoned": false, 00:28:11.102 "supported_io_types": { 00:28:11.102 "read": true, 00:28:11.102 "write": true, 00:28:11.102 "unmap": true, 00:28:11.102 "flush": false, 00:28:11.102 "reset": true, 00:28:11.102 "nvme_admin": false, 00:28:11.102 "nvme_io": false, 00:28:11.103 "nvme_io_md": false, 00:28:11.103 "write_zeroes": true, 00:28:11.103 "zcopy": false, 00:28:11.103 "get_zone_info": false, 00:28:11.103 "zone_management": false, 00:28:11.103 "zone_append": false, 00:28:11.103 "compare": false, 00:28:11.103 "compare_and_write": false, 00:28:11.103 "abort": false, 00:28:11.103 "seek_hole": true, 00:28:11.103 "seek_data": true, 00:28:11.103 "copy": false, 00:28:11.103 "nvme_iov_md": false 00:28:11.103 }, 00:28:11.103 "driver_specific": { 00:28:11.103 "lvol": { 00:28:11.103 "lvol_store_uuid": "1f3ce244-c2d0-4f04-a624-0c930b9518b4", 00:28:11.103 "base_bdev": "nvme0n1", 00:28:11.103 "thin_provision": true, 00:28:11.103 "num_allocated_clusters": 0, 00:28:11.103 "snapshot": false, 00:28:11.103 "clone": false, 00:28:11.103 "esnap_clone": false 00:28:11.103 } 00:28:11.103 } 00:28:11.103 } 00:28:11.103 ]' 00:28:11.103 13:54:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:11.103 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:11.103 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:11.103 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:28:11.103 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:28:11.103 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:28:11.103 13:54:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:11.103 13:54:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:11.362 13:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:11.362 13:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:11.362 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:11.362 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:11.362 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:11.362 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:28:11.362 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7c29503-f025-4255-8e93-29ce9b1c695a 00:28:11.620 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:11.620 { 00:28:11.620 "name": "c7c29503-f025-4255-8e93-29ce9b1c695a", 00:28:11.620 "aliases": [ 00:28:11.620 "lvs/nvme0n1p0" 00:28:11.620 ], 00:28:11.620 "product_name": "Logical Volume", 00:28:11.620 "block_size": 4096, 00:28:11.620 "num_blocks": 26476544, 00:28:11.620 "uuid": "c7c29503-f025-4255-8e93-29ce9b1c695a", 00:28:11.620 "assigned_rate_limits": { 00:28:11.620 "rw_ios_per_sec": 0, 00:28:11.620 "rw_mbytes_per_sec": 0, 00:28:11.620 "r_mbytes_per_sec": 0, 00:28:11.620 "w_mbytes_per_sec": 0 00:28:11.620 }, 00:28:11.620 "claimed": false, 00:28:11.620 "zoned": false, 00:28:11.620 "supported_io_types": { 00:28:11.620 "read": true, 00:28:11.620 "write": true, 00:28:11.620 "unmap": true, 00:28:11.620 "flush": false, 00:28:11.620 "reset": true, 00:28:11.620 "nvme_admin": false, 00:28:11.620 "nvme_io": false, 00:28:11.620 "nvme_io_md": false, 00:28:11.620 "write_zeroes": true, 00:28:11.620 "zcopy": false, 00:28:11.620 "get_zone_info": false, 00:28:11.620 "zone_management": false, 00:28:11.620 "zone_append": false, 00:28:11.620 "compare": false, 00:28:11.620 "compare_and_write": false, 00:28:11.620 "abort": false, 00:28:11.620 "seek_hole": true, 00:28:11.620 "seek_data": true, 00:28:11.620 "copy": false, 00:28:11.620 "nvme_iov_md": false 00:28:11.620 }, 00:28:11.620 "driver_specific": { 00:28:11.620 "lvol": { 00:28:11.620 "lvol_store_uuid": "1f3ce244-c2d0-4f04-a624-0c930b9518b4", 00:28:11.620 "base_bdev": "nvme0n1", 00:28:11.620 "thin_provision": true, 00:28:11.620 "num_allocated_clusters": 0, 00:28:11.621 "snapshot": false, 00:28:11.621 "clone": false, 00:28:11.621 "esnap_clone": false 00:28:11.621 } 00:28:11.621 } 00:28:11.621 } 00:28:11.621 ]' 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c7c29503-f025-4255-8e93-29ce9b1c695a 
--l2p_dram_limit 10' 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:11.621 13:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c7c29503-f025-4255-8e93-29ce9b1c695a --l2p_dram_limit 10 -c nvc0n1p0 00:28:11.880 [2024-11-06 13:54:05.708273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.708552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:11.880 [2024-11-06 13:54:05.708584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:11.880 [2024-11-06 13:54:05.708596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.708693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.708707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:11.880 [2024-11-06 13:54:05.708722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:11.880 [2024-11-06 13:54:05.708732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.708757] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:11.880 [2024-11-06 13:54:05.709824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:11.880 [2024-11-06 13:54:05.709864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.709876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:11.880 [2024-11-06 13:54:05.709889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:28:11.880 [2024-11-06 13:54:05.709900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.709983] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e50b95b7-345a-4ae3-a27b-2754588f5046 00:28:11.880 [2024-11-06 13:54:05.711433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.711472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:11.880 [2024-11-06 13:54:05.711485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:11.880 [2024-11-06 13:54:05.711498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.719030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.719074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:11.880 [2024-11-06 13:54:05.719087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.489 ms 00:28:11.880 [2024-11-06 13:54:05.719100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.719212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.719229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:11.880 [2024-11-06 13:54:05.719240] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:28:11.880 [2024-11-06 13:54:05.719258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.719331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.719347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:11.880 [2024-11-06 13:54:05.719357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:11.880 [2024-11-06 13:54:05.719374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.719400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:11.880 [2024-11-06 13:54:05.724559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.724611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:11.880 [2024-11-06 13:54:05.724628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:28:11.880 [2024-11-06 13:54:05.724639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.724679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.880 [2024-11-06 13:54:05.724690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:11.880 [2024-11-06 13:54:05.724703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:11.880 [2024-11-06 13:54:05.724713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.880 [2024-11-06 13:54:05.724751] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:11.880 [2024-11-06 13:54:05.724879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:11.880 [2024-11-06 13:54:05.724898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:11.881 [2024-11-06 13:54:05.724913] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:11.881 [2024-11-06 13:54:05.724929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:11.881 [2024-11-06 13:54:05.724941] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:11.881 [2024-11-06 13:54:05.724955] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:11.881 [2024-11-06 13:54:05.724965] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:11.881 [2024-11-06 13:54:05.724981] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:11.881 [2024-11-06 13:54:05.724991] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:11.881 [2024-11-06 13:54:05.725003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.881 [2024-11-06 13:54:05.725014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:11.881 [2024-11-06 13:54:05.725048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:28:11.881 [2024-11-06 13:54:05.725070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.881 [2024-11-06 13:54:05.725148] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.881 [2024-11-06 13:54:05.725160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:11.881 [2024-11-06 13:54:05.725173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:11.881 [2024-11-06 13:54:05.725183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.881 [2024-11-06 13:54:05.725279] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:11.881 [2024-11-06 13:54:05.725292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:11.881 [2024-11-06 13:54:05.725305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:11.881 [2024-11-06 13:54:05.725338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:11.881 [2024-11-06 13:54:05.725372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:11.881 [2024-11-06 13:54:05.725393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:11.881 [2024-11-06 13:54:05.725402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:11.881 [2024-11-06 13:54:05.725414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:11.881 [2024-11-06 13:54:05.725423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:11.881 [2024-11-06 13:54:05.725435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:11.881 [2024-11-06 13:54:05.725444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:11.881 [2024-11-06 13:54:05.725468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:11.881 [2024-11-06 13:54:05.725502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:11.881 [2024-11-06 13:54:05.725532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:11.881 [2024-11-06 13:54:05.725568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725589] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:11.881 [2024-11-06 13:54:05.725598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:11.881 [2024-11-06 13:54:05.725632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:11.881 [2024-11-06 13:54:05.725671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:11.881 [2024-11-06 13:54:05.725682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:11.881 [2024-11-06 13:54:05.725694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:11.881 [2024-11-06 13:54:05.725704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:11.881 [2024-11-06 13:54:05.725716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:11.881 [2024-11-06 13:54:05.725726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:11.881 [2024-11-06 13:54:05.725749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:11.881 [2024-11-06 13:54:05.725761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725771] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:11.881 [2024-11-06 13:54:05.725785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:11.881 [2024-11-06 13:54:05.725796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:11.881 [2024-11-06 13:54:05.725822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:11.881 [2024-11-06 13:54:05.725838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:11.881 [2024-11-06 13:54:05.725848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:11.881 [2024-11-06 13:54:05.725861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:11.881 [2024-11-06 13:54:05.725871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:11.881 [2024-11-06 13:54:05.725883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:11.881 [2024-11-06 13:54:05.725898] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:11.881 [2024-11-06 13:54:05.725914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:11.881 [2024-11-06 13:54:05.725930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:11.881 [2024-11-06 13:54:05.725944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:11.881 [2024-11-06 13:54:05.725955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:11.881 [2024-11-06 13:54:05.725971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:11.881 [2024-11-06 13:54:05.725983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:11.881 [2024-11-06 13:54:05.725997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:11.881 [2024-11-06 13:54:05.726008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:11.881 [2024-11-06 13:54:05.726022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:11.881 [2024-11-06 13:54:05.726045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:11.881 [2024-11-06 13:54:05.726063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:11.881 [2024-11-06 13:54:05.726075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:11.881 [2024-11-06 13:54:05.726088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:11.881 [2024-11-06 13:54:05.726100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:11.881 [2024-11-06 13:54:05.726116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:11.881 [2024-11-06 13:54:05.726127] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:11.881 [2024-11-06 13:54:05.726142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:11.881 [2024-11-06 13:54:05.726155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:11.881 [2024-11-06 13:54:05.726169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:11.881 [2024-11-06 13:54:05.726181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:11.881 [2024-11-06 13:54:05.726194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:11.881 [2024-11-06 13:54:05.726207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.881 [2024-11-06 13:54:05.726220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:11.881 [2024-11-06 13:54:05.726232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:28:11.881 [2024-11-06 13:54:05.726245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.881 [2024-11-06 13:54:05.726292] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:11.881 [2024-11-06 13:54:05.726311] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:14.413 [2024-11-06 13:54:08.117878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.413 [2024-11-06 13:54:08.117952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:14.413 [2024-11-06 13:54:08.117970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2391.570 ms 00:28:14.413 [2024-11-06 13:54:08.117984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.413 [2024-11-06 13:54:08.157826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.413 [2024-11-06 13:54:08.157889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.413 [2024-11-06 13:54:08.157905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.325 ms 00:28:14.413 [2024-11-06 13:54:08.157919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.413 [2024-11-06 13:54:08.158117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.413 [2024-11-06 13:54:08.158149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:14.413 [2024-11-06 13:54:08.158161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:14.413 [2024-11-06 13:54:08.158181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.413 [2024-11-06 13:54:08.205087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.413 [2024-11-06 13:54:08.205151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.413 [2024-11-06 13:54:08.205167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.858 ms 00:28:14.413 [2024-11-06 13:54:08.205211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.413 [2024-11-06 13:54:08.205270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.413 [2024-11-06 13:54:08.205289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.413 [2024-11-06 13:54:08.205301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:14.413 [2024-11-06 13:54:08.205313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.413 [2024-11-06 13:54:08.205837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.413 [2024-11-06 13:54:08.205857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.413 [2024-11-06 13:54:08.205868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:28:14.413 [2024-11-06 13:54:08.205881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.413 [2024-11-06 13:54:08.205988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.413 [2024-11-06 13:54:08.206001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.414 [2024-11-06 13:54:08.206015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:14.414 [2024-11-06 13:54:08.206031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.414 [2024-11-06 13:54:08.227896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.414 [2024-11-06 13:54:08.228241] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.414 [2024-11-06 13:54:08.228271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.820 ms 00:28:14.414 [2024-11-06 13:54:08.228287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.414 [2024-11-06 13:54:08.258367] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:14.414 [2024-11-06 13:54:08.261807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.414 [2024-11-06 13:54:08.261852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:14.414 [2024-11-06 13:54:08.261870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.382 ms 00:28:14.414 [2024-11-06 13:54:08.261881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.414 [2024-11-06 13:54:08.337318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.414 [2024-11-06 13:54:08.337389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:14.414 [2024-11-06 13:54:08.337410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.370 ms 00:28:14.414 [2024-11-06 13:54:08.337422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.414 [2024-11-06 13:54:08.337636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.414 [2024-11-06 13:54:08.337653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:14.414 [2024-11-06 13:54:08.337672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:28:14.414 [2024-11-06 13:54:08.337682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.414 [2024-11-06 13:54:08.378694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.414 [2024-11-06 13:54:08.378997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:14.414 [2024-11-06 13:54:08.379045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.923 ms 00:28:14.414 [2024-11-06 13:54:08.379059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.673 [2024-11-06 13:54:08.419873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.673 [2024-11-06 13:54:08.420168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:14.673 [2024-11-06 13:54:08.420203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.730 ms 00:28:14.673 [2024-11-06 13:54:08.420214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.673 [2024-11-06 13:54:08.420968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.673 [2024-11-06 13:54:08.420990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:14.673 [2024-11-06 13:54:08.421004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:28:14.673 [2024-11-06 13:54:08.421029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.673 [2024-11-06 13:54:08.530401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.673 [2024-11-06 13:54:08.530469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:14.673 [2024-11-06 13:54:08.530496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.229 ms 00:28:14.673 [2024-11-06 13:54:08.530508] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.673 [2024-11-06 13:54:08.574007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.673 [2024-11-06 13:54:08.574302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:14.673 [2024-11-06 13:54:08.574335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.330 ms 00:28:14.673 [2024-11-06 13:54:08.574348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.673 [2024-11-06 13:54:08.618269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.673 [2024-11-06 13:54:08.618337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:14.673 [2024-11-06 13:54:08.618360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.836 ms 00:28:14.673 [2024-11-06 13:54:08.618372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.932 [2024-11-06 13:54:08.661958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.932 [2024-11-06 13:54:08.662047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:14.932 [2024-11-06 13:54:08.662069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.495 ms 00:28:14.932 [2024-11-06 13:54:08.662081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.932 [2024-11-06 13:54:08.662164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.932 [2024-11-06 13:54:08.662179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:14.932 [2024-11-06 13:54:08.662198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:14.932 [2024-11-06 13:54:08.662209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.932 [2024-11-06 13:54:08.662349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.932 [2024-11-06 13:54:08.662363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:14.932 [2024-11-06 13:54:08.662381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:14.932 [2024-11-06 13:54:08.662402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.932 [2024-11-06 13:54:08.663668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2954.857 ms, result 0 00:28:14.932 { 00:28:14.932 "name": "ftl0", 00:28:14.932 "uuid": "e50b95b7-345a-4ae3-a27b-2754588f5046" 00:28:14.932 } 00:28:14.932 13:54:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:14.932 13:54:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:14.932 13:54:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:14.932 13:54:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:15.192 13:54:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:15.192 /dev/nbd0 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:15.192 1+0 records in 00:28:15.192 1+0 records out 00:28:15.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388109 s, 10.6 MB/s 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:28:15.192 13:54:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:15.451 [2024-11-06 13:54:09.277599] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:28:15.451 [2024-11-06 13:54:09.277792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78823 ] 00:28:15.710 [2024-11-06 13:54:09.481864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.710 [2024-11-06 13:54:09.650429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.087  [2024-11-06T13:54:12.446Z] Copying: 181/1024 [MB] (181 MBps) [2024-11-06T13:54:13.383Z] Copying: 366/1024 [MB] (184 MBps) [2024-11-06T13:54:14.321Z] Copying: 549/1024 [MB] (183 MBps) [2024-11-06T13:54:15.257Z] Copying: 722/1024 [MB] (173 MBps) [2024-11-06T13:54:16.260Z] Copying: 879/1024 [MB] (156 MBps) [2024-11-06T13:54:17.639Z] Copying: 1024/1024 [MB] (average 171 MBps) 00:28:23.656 00:28:23.656 13:54:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:25.560 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:25.818 [2024-11-06 13:54:19.611264] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
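The /dev/nbd0 target of the copy starting here was assembled over rpc.py earlier in the trace, and each get_bdev_size check along the way is just block_size × num_blocks converted to MiB: 4096 × 26476544 / 2^20 = 103424 MiB for the lvol, thin-provisioned on a 4096 × 1310720 / 2^20 = 5120 MiB base namespace. A condensed replay of the calls visible above, using the UUIDs from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device and NV-cache device, both QEMU-emulated NVMe.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

    # Thin lvol for the FTL base; cache slice split off nvc0n1.
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 1f3ce244-c2d0-4f04-a624-0c930b9518b4
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev with a 10 MiB L2P DRAM cap, exposed as /dev/nbd0 for dd.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d c7c29503-f025-4255-8e93-29ce9b1c695a --l2p_dram_limit 10 -c nvc0n1p0
    $rpc nbd_start_disk ftl0 /dev/nbd0

The -t 240 matches the timeout picked up at dirty_shutdown.sh@24; it has to cover the NV-cache scrub, which alone took ~2.4 s of the 2954.857 ms FTL startup reported above.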
00:28:25.818 [2024-11-06 13:54:19.613265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78927 ] 00:28:26.078 [2024-11-06 13:54:19.812329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.078 [2024-11-06 13:54:19.952139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.454  [2024-11-06T13:54:22.372Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-06T13:54:23.748Z] Copying: 36/1024 [MB] (17 MBps) [2024-11-06T13:54:24.682Z] Copying: 54/1024 [MB] (18 MBps) [2024-11-06T13:54:25.615Z] Copying: 73/1024 [MB] (18 MBps) [2024-11-06T13:54:26.549Z] Copying: 91/1024 [MB] (18 MBps) [2024-11-06T13:54:27.563Z] Copying: 110/1024 [MB] (18 MBps) [2024-11-06T13:54:28.494Z] Copying: 128/1024 [MB] (18 MBps) [2024-11-06T13:54:29.426Z] Copying: 147/1024 [MB] (18 MBps) [2024-11-06T13:54:30.357Z] Copying: 165/1024 [MB] (18 MBps) [2024-11-06T13:54:31.730Z] Copying: 183/1024 [MB] (18 MBps) [2024-11-06T13:54:32.663Z] Copying: 202/1024 [MB] (18 MBps) [2024-11-06T13:54:33.598Z] Copying: 221/1024 [MB] (18 MBps) [2024-11-06T13:54:34.608Z] Copying: 240/1024 [MB] (18 MBps) [2024-11-06T13:54:35.543Z] Copying: 258/1024 [MB] (18 MBps) [2024-11-06T13:54:36.479Z] Copying: 277/1024 [MB] (18 MBps) [2024-11-06T13:54:37.415Z] Copying: 296/1024 [MB] (18 MBps) [2024-11-06T13:54:38.349Z] Copying: 315/1024 [MB] (18 MBps) [2024-11-06T13:54:39.724Z] Copying: 333/1024 [MB] (18 MBps) [2024-11-06T13:54:40.658Z] Copying: 352/1024 [MB] (18 MBps) [2024-11-06T13:54:41.591Z] Copying: 370/1024 [MB] (18 MBps) [2024-11-06T13:54:42.529Z] Copying: 389/1024 [MB] (18 MBps) [2024-11-06T13:54:43.463Z] Copying: 408/1024 [MB] (18 MBps) [2024-11-06T13:54:44.399Z] Copying: 426/1024 [MB] (17 MBps) [2024-11-06T13:54:45.774Z] Copying: 444/1024 [MB] (17 MBps) [2024-11-06T13:54:46.707Z] Copying: 461/1024 [MB] (17 MBps) [2024-11-06T13:54:47.642Z] Copying: 479/1024 [MB] (17 MBps) [2024-11-06T13:54:48.577Z] Copying: 496/1024 [MB] (17 MBps) [2024-11-06T13:54:49.512Z] Copying: 514/1024 [MB] (17 MBps) [2024-11-06T13:54:50.492Z] Copying: 531/1024 [MB] (17 MBps) [2024-11-06T13:54:51.427Z] Copying: 548/1024 [MB] (17 MBps) [2024-11-06T13:54:52.361Z] Copying: 566/1024 [MB] (17 MBps) [2024-11-06T13:54:53.735Z] Copying: 585/1024 [MB] (18 MBps) [2024-11-06T13:54:54.670Z] Copying: 603/1024 [MB] (17 MBps) [2024-11-06T13:54:55.604Z] Copying: 620/1024 [MB] (17 MBps) [2024-11-06T13:54:56.540Z] Copying: 638/1024 [MB] (17 MBps) [2024-11-06T13:54:57.475Z] Copying: 655/1024 [MB] (17 MBps) [2024-11-06T13:54:58.411Z] Copying: 673/1024 [MB] (17 MBps) [2024-11-06T13:54:59.795Z] Copying: 691/1024 [MB] (18 MBps) [2024-11-06T13:55:00.370Z] Copying: 709/1024 [MB] (17 MBps) [2024-11-06T13:55:01.745Z] Copying: 727/1024 [MB] (17 MBps) [2024-11-06T13:55:02.681Z] Copying: 745/1024 [MB] (18 MBps) [2024-11-06T13:55:03.615Z] Copying: 763/1024 [MB] (18 MBps) [2024-11-06T13:55:04.550Z] Copying: 781/1024 [MB] (18 MBps) [2024-11-06T13:55:05.485Z] Copying: 799/1024 [MB] (17 MBps) [2024-11-06T13:55:06.420Z] Copying: 817/1024 [MB] (18 MBps) [2024-11-06T13:55:07.354Z] Copying: 835/1024 [MB] (17 MBps) [2024-11-06T13:55:08.727Z] Copying: 851/1024 [MB] (16 MBps) [2024-11-06T13:55:09.693Z] Copying: 868/1024 [MB] (16 MBps) [2024-11-06T13:55:10.627Z] Copying: 884/1024 [MB] (16 MBps) [2024-11-06T13:55:11.563Z] Copying: 900/1024 [MB] (16 MBps) 
[2024-11-06T13:55:12.498Z] Copying: 917/1024 [MB] (16 MBps) [2024-11-06T13:55:13.433Z] Copying: 933/1024 [MB] (16 MBps) [2024-11-06T13:55:14.369Z] Copying: 949/1024 [MB] (16 MBps) [2024-11-06T13:55:15.747Z] Copying: 966/1024 [MB] (16 MBps) [2024-11-06T13:55:16.682Z] Copying: 982/1024 [MB] (15 MBps) [2024-11-06T13:55:17.619Z] Copying: 998/1024 [MB] (15 MBps) [2024-11-06T13:55:18.186Z] Copying: 1013/1024 [MB] (15 MBps) [2024-11-06T13:55:19.579Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:29:25.596 00:29:25.596 13:55:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:29:25.596 13:55:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:29:25.596 13:55:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:25.866 [2024-11-06 13:55:19.753557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.866 [2024-11-06 13:55:19.753842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:25.866 [2024-11-06 13:55:19.753873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:25.866 [2024-11-06 13:55:19.753889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.866 [2024-11-06 13:55:19.753941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:25.866 [2024-11-06 13:55:19.758652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.866 [2024-11-06 13:55:19.758705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:25.866 [2024-11-06 13:55:19.758724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms 00:29:25.866 [2024-11-06 13:55:19.758736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.866 [2024-11-06 13:55:19.760982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.866 [2024-11-06 13:55:19.761267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:25.866 [2024-11-06 13:55:19.761301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:29:25.866 [2024-11-06 13:55:19.761315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.866 [2024-11-06 13:55:19.779263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.866 [2024-11-06 13:55:19.779346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:25.866 [2024-11-06 13:55:19.779369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.896 ms 00:29:25.866 [2024-11-06 13:55:19.779382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.866 [2024-11-06 13:55:19.785226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.866 [2024-11-06 13:55:19.785282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:25.866 [2024-11-06 13:55:19.785301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.756 ms 00:29:25.866 [2024-11-06 13:55:19.785313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.866 [2024-11-06 13:55:19.829620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.866 [2024-11-06 13:55:19.829934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:25.866 [2024-11-06 13:55:19.829968] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.166 ms 00:29:25.866 [2024-11-06 13:55:19.829980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.126 [2024-11-06 13:55:19.856541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.126 [2024-11-06 13:55:19.856825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:26.126 [2024-11-06 13:55:19.856860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.456 ms 00:29:26.126 [2024-11-06 13:55:19.856876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.126 [2024-11-06 13:55:19.857122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.126 [2024-11-06 13:55:19.857140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:26.126 [2024-11-06 13:55:19.857155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:29:26.126 [2024-11-06 13:55:19.857167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.126 [2024-11-06 13:55:19.900871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.126 [2024-11-06 13:55:19.900941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:26.126 [2024-11-06 13:55:19.900963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.673 ms 00:29:26.126 [2024-11-06 13:55:19.900974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.126 [2024-11-06 13:55:19.942887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.126 [2024-11-06 13:55:19.942955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:26.126 [2024-11-06 13:55:19.942993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.782 ms 00:29:26.126 [2024-11-06 13:55:19.943005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.126 [2024-11-06 13:55:19.984861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.126 [2024-11-06 13:55:19.984920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:26.126 [2024-11-06 13:55:19.984941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.733 ms 00:29:26.126 [2024-11-06 13:55:19.984953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.126 [2024-11-06 13:55:20.028012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.126 [2024-11-06 13:55:20.028094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:26.126 [2024-11-06 13:55:20.028114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.854 ms 00:29:26.126 [2024-11-06 13:55:20.028125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.126 [2024-11-06 13:55:20.028219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:26.126 [2024-11-06 13:55:20.028264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:26.126 [2024-11-06 13:55:20.028288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:26.126 [2024-11-06 13:55:20.028300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:26.126 [2024-11-06 13:55:20.028314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 
0 / 261120 wr_cnt: 0 state: free 
00:29:26.126 [2024-11-06 13:55:20.028325 .. 13:55:20.029728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 5-100: 0 / 261120 wr_cnt: 0 state: free [96 identical per-band records condensed; every band is empty and free after the clean shutdown] 
00:29:26.127 [2024-11-06 13:55:20.029750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:26.127 [2024-11-06 13:55:20.029764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e50b95b7-345a-4ae3-a27b-2754588f5046 00:29:26.127 [2024-11-06 13:55:20.029776] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:26.127 [2024-11-06 13:55:20.029793] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:26.127 [2024-11-06 13:55:20.029803] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:26.127 [2024-11-06 13:55:20.029821] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:26.127 [2024-11-06 13:55:20.029832] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:26.128 [2024-11-06 13:55:20.029846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:26.128 [2024-11-06 13:55:20.029858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:26.128 [2024-11-06 13:55:20.029871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:26.128 [2024-11-06 13:55:20.029881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:26.128 [2024-11-06 13:55:20.029896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.128 [2024-11-06 13:55:20.029908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:26.128 [2024-11-06 13:55:20.029924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:29:26.128 [2024-11-06 13:55:20.029943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.128 [2024-11-06 13:55:20.051506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.128 [2024-11-06 13:55:20.051578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:26.128 [2024-11-06 13:55:20.051598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.436 ms 00:29:26.128 [2024-11-06 13:55:20.051611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.128 [2024-11-06 13:55:20.052199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.128 [2024-11-06 13:55:20.052212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:26.128 [2024-11-06 13:55:20.052228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:29:26.128 [2024-11-06 13:55:20.052239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.387 [2024-11-06 13:55:20.130773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.387 [2024-11-06 13:55:20.131091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:26.387 [2024-11-06 13:55:20.131235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.387 [2024-11-06 13:55:20.131348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.387 [2024-11-06 13:55:20.131602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.387 [2024-11-06 13:55:20.131727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:26.387 [2024-11-06 13:55:20.131846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.387 [2024-11-06 13:55:20.131968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.387 [2024-11-06 13:55:20.132335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.387 [2024-11-06 13:55:20.132495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:26.387 [2024-11-06 13:55:20.132624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.387 [2024-11-06 13:55:20.132675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:26.387 [2024-11-06 13:55:20.132792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.387 [2024-11-06 13:55:20.132840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:26.387 [2024-11-06 13:55:20.133002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.387 [2024-11-06 13:55:20.133108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.387 [2024-11-06 13:55:20.279048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.387 [2024-11-06 13:55:20.279416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:26.387 [2024-11-06 13:55:20.279546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.387 [2024-11-06 13:55:20.279603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.646 [2024-11-06 13:55:20.396454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.646 [2024-11-06 13:55:20.396757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:26.646 [2024-11-06 13:55:20.396857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.646 [2024-11-06 13:55:20.396901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.646 [2024-11-06 13:55:20.397098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.646 [2024-11-06 13:55:20.397172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:26.646 [2024-11-06 13:55:20.397220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.646 [2024-11-06 13:55:20.397259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.646 [2024-11-06 13:55:20.397364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.646 [2024-11-06 13:55:20.397516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:26.646 [2024-11-06 13:55:20.397568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.646 [2024-11-06 13:55:20.397604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.646 [2024-11-06 13:55:20.397787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.646 [2024-11-06 13:55:20.397833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:26.646 [2024-11-06 13:55:20.397888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.646 [2024-11-06 13:55:20.397926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.646 [2024-11-06 13:55:20.398144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.646 [2024-11-06 13:55:20.398201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:26.646 [2024-11-06 13:55:20.398425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.646 [2024-11-06 13:55:20.398485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.646 [2024-11-06 13:55:20.398569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.646 [2024-11-06 13:55:20.398668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:26.646 [2024-11-06 13:55:20.398726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.646 [2024-11-06 
13:55:20.398763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.646 [2024-11-06 13:55:20.398915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:26.646 [2024-11-06 13:55:20.399004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:26.646 [2024-11-06 13:55:20.399099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:26.646 [2024-11-06 13:55:20.399202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.646 [2024-11-06 13:55:20.399403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 645.774 ms, result 0 00:29:26.646 true 00:29:26.646 13:55:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78675 00:29:26.646 13:55:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78675 00:29:26.646 13:55:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:29:26.646 [2024-11-06 13:55:20.521851] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:29:26.646 [2024-11-06 13:55:20.522246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79549 ] 00:29:26.905 [2024-11-06 13:55:20.700701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.905 [2024-11-06 13:55:20.828522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.282  [2024-11-06T13:55:23.201Z] Copying: 180/1024 [MB] (180 MBps) [2024-11-06T13:55:24.578Z] Copying: 366/1024 [MB] (186 MBps) [2024-11-06T13:55:25.514Z] Copying: 551/1024 [MB] (184 MBps) [2024-11-06T13:55:26.450Z] Copying: 732/1024 [MB] (181 MBps) [2024-11-06T13:55:27.016Z] Copying: 918/1024 [MB] (185 MBps) [2024-11-06T13:55:27.954Z] Copying: 1024/1024 [MB] (average 183 MBps) 00:29:33.971 00:29:34.230 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78675 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:34.230 13:55:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:34.230 [2024-11-06 13:55:28.087900] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
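The @83-@88 steps above are the dirty-shutdown simulation itself: spdk_tgt (pid 78675) is killed with SIGKILL so FTL never gets to persist a clean-shutdown marker, the dead target's trace file is removed, and a standalone spdk_dd then writes a second 1 GiB of random data into ftl0 directly from the JSON config, seeking past the region written before the "crash". A condensed bash sketch of the sequence, with flags and paths taken verbatim from the trace (the svcpid and testdir variable names are illustrative):

  svcpid=78675
  testdir=/home/vagrant/spdk_repo/spdk/test/ftl
  kill -9 "$svcpid"                              # simulate power loss: no clean shutdown path runs
  rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"     # drop the dead target's trace shm file
  # refill testfile2 with random data, then replay it into the FTL bdev;
  # 262144 blocks x 4096 B = 1 GiB, seeking 1 GiB past the pre-crash data
  "$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of="$testdir/testfile2" --bs=4096 --count=262144
  "$SPDK_BIN_DIR/spdk_dd" --if="$testdir/testfile2" --ob=ftl0 --count=262144 --seek=262144 \
      --json="$testdir/config/ftl.json"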
00:29:34.230 [2024-11-06 13:55:28.088105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79624 ] 00:29:34.488 [2024-11-06 13:55:28.278679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.488 [2024-11-06 13:55:28.413390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.054 [2024-11-06 13:55:28.797349] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:35.054 [2024-11-06 13:55:28.797424] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:35.054 [2024-11-06 13:55:28.864366] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:35.054 [2024-11-06 13:55:28.864709] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:35.054 [2024-11-06 13:55:28.864865] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:35.314 [2024-11-06 13:55:29.131099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.314 [2024-11-06 13:55:29.131170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:35.314 [2024-11-06 13:55:29.131188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:35.314 [2024-11-06 13:55:29.131200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.314 [2024-11-06 13:55:29.131277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.314 [2024-11-06 13:55:29.131292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:35.314 [2024-11-06 13:55:29.131304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:35.314 [2024-11-06 13:55:29.131315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.314 [2024-11-06 13:55:29.131339] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:35.314 [2024-11-06 13:55:29.132458] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:35.314 [2024-11-06 13:55:29.132485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.314 [2024-11-06 13:55:29.132497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:35.314 [2024-11-06 13:55:29.132508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:29:35.314 [2024-11-06 13:55:29.132518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.314 [2024-11-06 13:55:29.134099] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:35.315 [2024-11-06 13:55:29.155663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.155741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:35.315 [2024-11-06 13:55:29.155759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.561 ms 00:29:35.315 [2024-11-06 13:55:29.155770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.155917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.155936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:29:35.315 [2024-11-06 13:55:29.155948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:35.315 [2024-11-06 13:55:29.155959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.163845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.164156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:35.315 [2024-11-06 13:55:29.164192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.749 ms 00:29:35.315 [2024-11-06 13:55:29.164206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.164321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.164336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:35.315 [2024-11-06 13:55:29.164347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:29:35.315 [2024-11-06 13:55:29.164359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.164436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.164451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:35.315 [2024-11-06 13:55:29.164463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:35.315 [2024-11-06 13:55:29.164473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.164520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:35.315 [2024-11-06 13:55:29.169728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.169767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:35.315 [2024-11-06 13:55:29.169781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.216 ms 00:29:35.315 [2024-11-06 13:55:29.169791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.169828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.169839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:35.315 [2024-11-06 13:55:29.169849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:35.315 [2024-11-06 13:55:29.169860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.169939] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:35.315 [2024-11-06 13:55:29.169963] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:35.315 [2024-11-06 13:55:29.170070] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:35.315 [2024-11-06 13:55:29.170102] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:35.315 [2024-11-06 13:55:29.170202] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:35.315 [2024-11-06 13:55:29.170219] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:35.315 
[2024-11-06 13:55:29.170238] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:35.315 [2024-11-06 13:55:29.170257] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:35.315 [2024-11-06 13:55:29.170280] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:35.315 [2024-11-06 13:55:29.170298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:35.315 [2024-11-06 13:55:29.170324] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:35.315 [2024-11-06 13:55:29.170339] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:35.315 [2024-11-06 13:55:29.170357] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:35.315 [2024-11-06 13:55:29.170369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.170379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:35.315 [2024-11-06 13:55:29.170391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:29:35.315 [2024-11-06 13:55:29.170401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.170520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.315 [2024-11-06 13:55:29.170545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:35.315 [2024-11-06 13:55:29.170563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:29:35.315 [2024-11-06 13:55:29.170582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.315 [2024-11-06 13:55:29.170709] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:35.315 [2024-11-06 13:55:29.170732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:35.315 [2024-11-06 13:55:29.170749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:35.315 [2024-11-06 13:55:29.170763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.315 [2024-11-06 13:55:29.170778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:35.315 [2024-11-06 13:55:29.170792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:35.315 [2024-11-06 13:55:29.170810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:35.315 [2024-11-06 13:55:29.170828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:35.315 [2024-11-06 13:55:29.170842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:35.315 [2024-11-06 13:55:29.170856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:35.315 [2024-11-06 13:55:29.170870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:35.315 [2024-11-06 13:55:29.170899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:35.315 [2024-11-06 13:55:29.170915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:35.315 [2024-11-06 13:55:29.170929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:35.315 [2024-11-06 13:55:29.170944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:35.315 [2024-11-06 13:55:29.170957] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.315 [2024-11-06 13:55:29.170970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:35.315 [2024-11-06 13:55:29.170987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:35.315 [2024-11-06 13:55:29.171000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.315 [2024-11-06 13:55:29.171014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:35.315 [2024-11-06 13:55:29.171042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:35.315 [2024-11-06 13:55:29.171059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.315 [2024-11-06 13:55:29.171076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:35.315 [2024-11-06 13:55:29.171090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:35.315 [2024-11-06 13:55:29.171103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.315 [2024-11-06 13:55:29.171117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:35.315 [2024-11-06 13:55:29.171130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:35.315 [2024-11-06 13:55:29.171144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.315 [2024-11-06 13:55:29.171159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:35.315 [2024-11-06 13:55:29.171175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:35.315 [2024-11-06 13:55:29.171189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.315 [2024-11-06 13:55:29.171202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:35.315 [2024-11-06 13:55:29.171218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:35.315 [2024-11-06 13:55:29.171232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:35.315 [2024-11-06 13:55:29.171245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:35.315 [2024-11-06 13:55:29.171260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:35.315 [2024-11-06 13:55:29.171279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:35.315 [2024-11-06 13:55:29.171292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:35.315 [2024-11-06 13:55:29.171305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:35.315 [2024-11-06 13:55:29.171319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.315 [2024-11-06 13:55:29.171335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:35.315 [2024-11-06 13:55:29.171349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:35.315 [2024-11-06 13:55:29.171362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.315 [2024-11-06 13:55:29.171375] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:35.315 [2024-11-06 13:55:29.171394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:35.315 [2024-11-06 13:55:29.171413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:35.315 [2024-11-06 13:55:29.171435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.315 [2024-11-06 
13:55:29.171450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:35.315 [2024-11-06 13:55:29.171466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:35.315 [2024-11-06 13:55:29.171482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:35.315 [2024-11-06 13:55:29.171496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:35.315 [2024-11-06 13:55:29.171511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:35.315 [2024-11-06 13:55:29.171527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:35.315 [2024-11-06 13:55:29.171546] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:35.315 [2024-11-06 13:55:29.171565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.316 [2024-11-06 13:55:29.171581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:35.316 [2024-11-06 13:55:29.171596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:35.316 [2024-11-06 13:55:29.171612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:35.316 [2024-11-06 13:55:29.171626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:35.316 [2024-11-06 13:55:29.171643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:35.316 [2024-11-06 13:55:29.171659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:35.316 [2024-11-06 13:55:29.171674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:35.316 [2024-11-06 13:55:29.171688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:35.316 [2024-11-06 13:55:29.171703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:35.316 [2024-11-06 13:55:29.171719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:35.316 [2024-11-06 13:55:29.171738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:35.316 [2024-11-06 13:55:29.171757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:35.316 [2024-11-06 13:55:29.171771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:35.316 [2024-11-06 13:55:29.171786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:35.316 [2024-11-06 13:55:29.171800] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:29:35.316 [2024-11-06 13:55:29.171817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.316 [2024-11-06 13:55:29.171833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:35.316 [2024-11-06 13:55:29.171851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:35.316 [2024-11-06 13:55:29.171867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:35.316 [2024-11-06 13:55:29.171884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:35.316 [2024-11-06 13:55:29.171900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.316 [2024-11-06 13:55:29.171916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:35.316 [2024-11-06 13:55:29.171935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:29:35.316 [2024-11-06 13:55:29.171950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.316 [2024-11-06 13:55:29.213796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.316 [2024-11-06 13:55:29.213853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:35.316 [2024-11-06 13:55:29.213870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.775 ms 00:29:35.316 [2024-11-06 13:55:29.213881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.316 [2024-11-06 13:55:29.213985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.316 [2024-11-06 13:55:29.214002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:35.316 [2024-11-06 13:55:29.214031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:35.316 [2024-11-06 13:55:29.214044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.316 [2024-11-06 13:55:29.279808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.316 [2024-11-06 13:55:29.280053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:35.316 [2024-11-06 13:55:29.280093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.665 ms 00:29:35.316 [2024-11-06 13:55:29.280109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.316 [2024-11-06 13:55:29.280188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.316 [2024-11-06 13:55:29.280204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:35.316 [2024-11-06 13:55:29.280218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:35.316 [2024-11-06 13:55:29.280231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.316 [2024-11-06 13:55:29.280781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.316 [2024-11-06 13:55:29.280804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:35.316 [2024-11-06 13:55:29.280815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:29:35.316 [2024-11-06 13:55:29.280831] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.316 [2024-11-06 13:55:29.280967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.316 [2024-11-06 13:55:29.280982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:35.316 [2024-11-06 13:55:29.280993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:35.316 [2024-11-06 13:55:29.281003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.300854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.300915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:35.576 [2024-11-06 13:55:29.300932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.805 ms 00:29:35.576 [2024-11-06 13:55:29.300943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.321316] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:35.576 [2024-11-06 13:55:29.321381] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:35.576 [2024-11-06 13:55:29.321398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.321410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:35.576 [2024-11-06 13:55:29.321423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.285 ms 00:29:35.576 [2024-11-06 13:55:29.321434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.353558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.353640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:35.576 [2024-11-06 13:55:29.353689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.052 ms 00:29:35.576 [2024-11-06 13:55:29.353700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.373429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.373488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:35.576 [2024-11-06 13:55:29.373504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.642 ms 00:29:35.576 [2024-11-06 13:55:29.373514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.393056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.393113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:35.576 [2024-11-06 13:55:29.393129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.486 ms 00:29:35.576 [2024-11-06 13:55:29.393139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.394033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.394061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:35.576 [2024-11-06 13:55:29.394074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:29:35.576 [2024-11-06 13:55:29.394084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
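The restore steps above are the dirty-start path doing its job: the NV cache reports two full and two empty chunks to replay, and the valid map, band info, and trim metadata all come back from media rather than from a clean-shutdown snapshot. The payoff the test builds toward is an integrity check against the md5 recorded at step @76 earlier in this excerpt. That readback happens after this excerpt ends, so the sketch below is only an assumed shape for it, with illustrative file names (spdk_dd's --ib reads from a named bdev):

  testdir=/home/vagrant/spdk_repo/spdk/test/ftl
  md5_before=$(md5sum "$testdir/testfile" | cut -d' ' -f1)   # hash taken before the kill -9
  # read the first 1 GiB back out of ftl0 after the dirty restart and re-hash it
  "$SPDK_BIN_DIR/spdk_dd" --ib=ftl0 --of=/tmp/readback --bs=4096 --count=262144 \
      --json="$testdir/config/ftl.json"
  md5_after=$(md5sum /tmp/readback | cut -d' ' -f1)
  [[ $md5_before == "$md5_after" ]] || echo "data lost across dirty shutdown" >&2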
00:29:35.576 [2024-11-06 13:55:29.484421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.484495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:35.576 [2024-11-06 13:55:29.484513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.290 ms 00:29:35.576 [2024-11-06 13:55:29.484525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.497644] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:35.576 [2024-11-06 13:55:29.501201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.501404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:35.576 [2024-11-06 13:55:29.501434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.591 ms 00:29:35.576 [2024-11-06 13:55:29.501451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.501586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.501601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:35.576 [2024-11-06 13:55:29.501613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:35.576 [2024-11-06 13:55:29.501623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.501729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.501743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:35.576 [2024-11-06 13:55:29.501754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:35.576 [2024-11-06 13:55:29.501764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.501790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.501806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:35.576 [2024-11-06 13:55:29.501816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:35.576 [2024-11-06 13:55:29.501826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.501870] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:35.576 [2024-11-06 13:55:29.501884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.501894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:35.576 [2024-11-06 13:55:29.501904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:35.576 [2024-11-06 13:55:29.501914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.540389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 13:55:29.540454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:35.576 [2024-11-06 13:55:29.540489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.444 ms 00:29:35.576 [2024-11-06 13:55:29.540501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.540617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.576 [2024-11-06 
13:55:29.540631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:35.576 [2024-11-06 13:55:29.540643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:35.576 [2024-11-06 13:55:29.540653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.576 [2024-11-06 13:55:29.541988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.371 ms, result 0 00:29:36.953  [2024-11-06T13:55:31.871Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-06T13:55:32.808Z] Copying: 56/1024 [MB] (30 MBps) [2024-11-06T13:55:33.745Z] Copying: 83/1024 [MB] (26 MBps) [2024-11-06T13:55:34.683Z] Copying: 108/1024 [MB] (24 MBps) [2024-11-06T13:55:35.620Z] Copying: 137/1024 [MB] (29 MBps) [2024-11-06T13:55:36.580Z] Copying: 166/1024 [MB] (28 MBps) [2024-11-06T13:55:37.957Z] Copying: 192/1024 [MB] (26 MBps) [2024-11-06T13:55:38.893Z] Copying: 220/1024 [MB] (27 MBps) [2024-11-06T13:55:39.831Z] Copying: 247/1024 [MB] (27 MBps) [2024-11-06T13:55:40.768Z] Copying: 276/1024 [MB] (28 MBps) [2024-11-06T13:55:41.703Z] Copying: 305/1024 [MB] (28 MBps) [2024-11-06T13:55:42.640Z] Copying: 334/1024 [MB] (28 MBps) [2024-11-06T13:55:43.577Z] Copying: 362/1024 [MB] (28 MBps) [2024-11-06T13:55:44.951Z] Copying: 392/1024 [MB] (29 MBps) [2024-11-06T13:55:45.886Z] Copying: 422/1024 [MB] (29 MBps) [2024-11-06T13:55:46.820Z] Copying: 453/1024 [MB] (30 MBps) [2024-11-06T13:55:47.755Z] Copying: 483/1024 [MB] (30 MBps) [2024-11-06T13:55:48.690Z] Copying: 512/1024 [MB] (28 MBps) [2024-11-06T13:55:49.624Z] Copying: 542/1024 [MB] (30 MBps) [2024-11-06T13:55:50.559Z] Copying: 574/1024 [MB] (31 MBps) [2024-11-06T13:55:51.937Z] Copying: 605/1024 [MB] (30 MBps) [2024-11-06T13:55:52.874Z] Copying: 635/1024 [MB] (29 MBps) [2024-11-06T13:55:53.809Z] Copying: 664/1024 [MB] (29 MBps) [2024-11-06T13:55:54.744Z] Copying: 695/1024 [MB] (31 MBps) [2024-11-06T13:55:55.681Z] Copying: 726/1024 [MB] (30 MBps) [2024-11-06T13:55:56.618Z] Copying: 757/1024 [MB] (30 MBps) [2024-11-06T13:55:57.555Z] Copying: 788/1024 [MB] (31 MBps) [2024-11-06T13:55:58.931Z] Copying: 820/1024 [MB] (31 MBps) [2024-11-06T13:55:59.866Z] Copying: 852/1024 [MB] (31 MBps) [2024-11-06T13:56:00.801Z] Copying: 884/1024 [MB] (32 MBps) [2024-11-06T13:56:01.738Z] Copying: 915/1024 [MB] (31 MBps) [2024-11-06T13:56:02.674Z] Copying: 947/1024 [MB] (32 MBps) [2024-11-06T13:56:03.609Z] Copying: 980/1024 [MB] (32 MBps) [2024-11-06T13:56:04.988Z] Copying: 1012/1024 [MB] (32 MBps) [2024-11-06T13:56:04.988Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-06T13:56:04.988Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-06 13:56:04.791932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.005 [2024-11-06 13:56:04.792147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:11.005 [2024-11-06 13:56:04.792173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:11.005 [2024-11-06 13:56:04.792185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.005 [2024-11-06 13:56:04.795240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:11.005 [2024-11-06 13:56:04.800297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.005 [2024-11-06 13:56:04.800443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:11.005 [2024-11-06 13:56:04.800540] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.887 ms 00:30:11.005 [2024-11-06 13:56:04.800577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.005 [2024-11-06 13:56:04.809851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.005 [2024-11-06 13:56:04.810028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:11.005 [2024-11-06 13:56:04.810127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.685 ms 00:30:11.005 [2024-11-06 13:56:04.810165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.005 [2024-11-06 13:56:04.831506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.005 [2024-11-06 13:56:04.831667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:11.005 [2024-11-06 13:56:04.831779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.295 ms 00:30:11.005 [2024-11-06 13:56:04.831798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.005 [2024-11-06 13:56:04.837007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.005 [2024-11-06 13:56:04.837172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:11.005 [2024-11-06 13:56:04.837249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.169 ms 00:30:11.005 [2024-11-06 13:56:04.837286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.005 [2024-11-06 13:56:04.873305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.005 [2024-11-06 13:56:04.873453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:11.005 [2024-11-06 13:56:04.873554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.935 ms 00:30:11.005 [2024-11-06 13:56:04.873591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.005 [2024-11-06 13:56:04.894520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.005 [2024-11-06 13:56:04.894669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:11.005 [2024-11-06 13:56:04.894798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.871 ms 00:30:11.005 [2024-11-06 13:56:04.894835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.265 [2024-11-06 13:56:04.996625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.265 [2024-11-06 13:56:04.996668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:11.265 [2024-11-06 13:56:04.996691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.729 ms 00:30:11.265 [2024-11-06 13:56:04.996702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.265 [2024-11-06 13:56:05.034011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.265 [2024-11-06 13:56:05.034207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:11.265 [2024-11-06 13:56:05.034234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.289 ms 00:30:11.265 [2024-11-06 13:56:05.034252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.265 [2024-11-06 13:56:05.070600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.265 [2024-11-06 13:56:05.070638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist trim metadata 00:30:11.265 [2024-11-06 13:56:05.070652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.300 ms 00:30:11.265 [2024-11-06 13:56:05.070678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.266 [2024-11-06 13:56:05.105408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.266 [2024-11-06 13:56:05.105440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:11.266 [2024-11-06 13:56:05.105453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.691 ms 00:30:11.266 [2024-11-06 13:56:05.105479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.266 [2024-11-06 13:56:05.140336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.266 [2024-11-06 13:56:05.140369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:11.266 [2024-11-06 13:56:05.140381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.783 ms 00:30:11.266 [2024-11-06 13:56:05.140407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.266 [2024-11-06 13:56:05.140443] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:11.266 [2024-11-06 13:56:05.140460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 112128 / 261120 wr_cnt: 1 state: open 00:30:11.266 [2024-11-06 13:56:05.140473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:30:11.266 [2024-11-06 13:56:05.140630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.140994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:11.266 [2024-11-06 13:56:05.141263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141444] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:11.267 [2024-11-06 13:56:05.141556] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:11.267 [2024-11-06 13:56:05.141566] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e50b95b7-345a-4ae3-a27b-2754588f5046 00:30:11.267 [2024-11-06 13:56:05.141577] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 112128 00:30:11.267 [2024-11-06 13:56:05.141593] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 113088 00:30:11.267 [2024-11-06 13:56:05.141613] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 112128 00:30:11.267 [2024-11-06 13:56:05.141623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:30:11.267 [2024-11-06 13:56:05.141633] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:11.267 [2024-11-06 13:56:05.141643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:11.267 [2024-11-06 13:56:05.141653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:11.267 [2024-11-06 13:56:05.141662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:11.267 [2024-11-06 13:56:05.141671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:11.267 [2024-11-06 13:56:05.141682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.267 [2024-11-06 13:56:05.141692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:11.267 [2024-11-06 13:56:05.141702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:30:11.267 [2024-11-06 13:56:05.141712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.267 [2024-11-06 13:56:05.161466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.267 [2024-11-06 13:56:05.161497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:11.267 [2024-11-06 13:56:05.161510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.719 ms 00:30:11.267 [2024-11-06 13:56:05.161519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.267 [2024-11-06 
13:56:05.162044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.267 [2024-11-06 13:56:05.162071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:11.267 [2024-11-06 13:56:05.162082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:30:11.267 [2024-11-06 13:56:05.162098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.267 [2024-11-06 13:56:05.214556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.267 [2024-11-06 13:56:05.214597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:11.267 [2024-11-06 13:56:05.214614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.267 [2024-11-06 13:56:05.214628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.267 [2024-11-06 13:56:05.214702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.267 [2024-11-06 13:56:05.214717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:11.267 [2024-11-06 13:56:05.214730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.267 [2024-11-06 13:56:05.214747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.267 [2024-11-06 13:56:05.214816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.267 [2024-11-06 13:56:05.214832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:11.267 [2024-11-06 13:56:05.214845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.267 [2024-11-06 13:56:05.214859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.267 [2024-11-06 13:56:05.214878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.267 [2024-11-06 13:56:05.214892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:11.267 [2024-11-06 13:56:05.214905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.267 [2024-11-06 13:56:05.214918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.339036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.527 [2024-11-06 13:56:05.339086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:11.527 [2024-11-06 13:56:05.339102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.527 [2024-11-06 13:56:05.339129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.438320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.527 [2024-11-06 13:56:05.438379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:11.527 [2024-11-06 13:56:05.438394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.527 [2024-11-06 13:56:05.438405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.438520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.527 [2024-11-06 13:56:05.438533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.527 [2024-11-06 13:56:05.438544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.527 [2024-11-06 13:56:05.438554] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.438592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.527 [2024-11-06 13:56:05.438603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:11.527 [2024-11-06 13:56:05.438614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.527 [2024-11-06 13:56:05.438624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.438749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.527 [2024-11-06 13:56:05.438764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:11.527 [2024-11-06 13:56:05.438775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.527 [2024-11-06 13:56:05.438785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.438821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.527 [2024-11-06 13:56:05.438833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:11.527 [2024-11-06 13:56:05.438843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.527 [2024-11-06 13:56:05.438853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.438889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.527 [2024-11-06 13:56:05.438905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:11.527 [2024-11-06 13:56:05.438915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.527 [2024-11-06 13:56:05.438926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.438971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:11.527 [2024-11-06 13:56:05.438983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:11.527 [2024-11-06 13:56:05.438993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:11.527 [2024-11-06 13:56:05.439004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.527 [2024-11-06 13:56:05.439154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 649.378 ms, result 0 00:30:13.433 00:30:13.433 00:30:13.433 13:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:15.338 13:56:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:15.338 [2024-11-06 13:56:09.185500] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:30:15.338 [2024-11-06 13:56:09.185649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80062 ] 00:30:15.597 [2024-11-06 13:56:09.373171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.597 [2024-11-06 13:56:09.530539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.167 [2024-11-06 13:56:09.900914] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:16.167 [2024-11-06 13:56:09.900982] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:16.167 [2024-11-06 13:56:10.068852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.167 [2024-11-06 13:56:10.069116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:16.167 [2024-11-06 13:56:10.069165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:16.167 [2024-11-06 13:56:10.069186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.167 [2024-11-06 13:56:10.069293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.167 [2024-11-06 13:56:10.069317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:16.167 [2024-11-06 13:56:10.069343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:16.167 [2024-11-06 13:56:10.069363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.167 [2024-11-06 13:56:10.069407] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:16.167 [2024-11-06 13:56:10.070675] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:16.167 [2024-11-06 13:56:10.070707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.167 [2024-11-06 13:56:10.070719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:16.167 [2024-11-06 13:56:10.070731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:30:16.167 [2024-11-06 13:56:10.070757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.167 [2024-11-06 13:56:10.072271] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:16.167 [2024-11-06 13:56:10.092128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.167 [2024-11-06 13:56:10.092283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:16.167 [2024-11-06 13:56:10.092411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.858 ms 00:30:16.167 [2024-11-06 13:56:10.092452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.167 [2024-11-06 13:56:10.092542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.167 [2024-11-06 13:56:10.092750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:16.167 [2024-11-06 13:56:10.092790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:16.167 [2024-11-06 13:56:10.092822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.167 [2024-11-06 13:56:10.099761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:16.167 [2024-11-06 13:56:10.099932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:16.167 [2024-11-06 13:56:10.100055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.835 ms 00:30:16.167 [2024-11-06 13:56:10.100101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.167 [2024-11-06 13:56:10.100206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.167 [2024-11-06 13:56:10.100304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:16.167 [2024-11-06 13:56:10.100345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:16.167 [2024-11-06 13:56:10.100376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.167 [2024-11-06 13:56:10.100491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.167 [2024-11-06 13:56:10.100612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:16.167 [2024-11-06 13:56:10.100696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:16.167 [2024-11-06 13:56:10.100734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.167 [2024-11-06 13:56:10.100796] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:16.167 [2024-11-06 13:56:10.105938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.167 [2024-11-06 13:56:10.106105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:16.167 [2024-11-06 13:56:10.106178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.156 ms 00:30:16.168 [2024-11-06 13:56:10.106221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.168 [2024-11-06 13:56:10.106282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.168 [2024-11-06 13:56:10.106370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:16.168 [2024-11-06 13:56:10.106410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:16.168 [2024-11-06 13:56:10.106450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.168 [2024-11-06 13:56:10.106569] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:16.168 [2024-11-06 13:56:10.106628] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:16.168 [2024-11-06 13:56:10.106853] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:16.168 [2024-11-06 13:56:10.106919] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:16.168 [2024-11-06 13:56:10.107064] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:16.168 [2024-11-06 13:56:10.107119] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:16.168 [2024-11-06 13:56:10.107171] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:16.168 [2024-11-06 13:56:10.107224] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:16.168 [2024-11-06 13:56:10.107373] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:16.168 [2024-11-06 13:56:10.107495] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:16.168 [2024-11-06 13:56:10.107528] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:16.168 [2024-11-06 13:56:10.107558] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:16.168 [2024-11-06 13:56:10.107595] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:16.168 [2024-11-06 13:56:10.107627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.168 [2024-11-06 13:56:10.107658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:16.168 [2024-11-06 13:56:10.107689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:30:16.168 [2024-11-06 13:56:10.107793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.168 [2024-11-06 13:56:10.107944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.168 [2024-11-06 13:56:10.108008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:16.168 [2024-11-06 13:56:10.108150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:16.168 [2024-11-06 13:56:10.108190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.168 [2024-11-06 13:56:10.108330] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:16.168 [2024-11-06 13:56:10.108516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:16.168 [2024-11-06 13:56:10.108553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.168 [2024-11-06 13:56:10.108584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.168 [2024-11-06 13:56:10.108616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:16.168 [2024-11-06 13:56:10.108647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:16.168 [2024-11-06 13:56:10.108743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:16.168 [2024-11-06 13:56:10.108812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:16.168 [2024-11-06 13:56:10.108842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:16.168 [2024-11-06 13:56:10.108872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.168 [2024-11-06 13:56:10.108902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:16.168 [2024-11-06 13:56:10.108932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:16.168 [2024-11-06 13:56:10.108962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.168 [2024-11-06 13:56:10.108992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:16.168 [2024-11-06 13:56:10.109148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:16.168 [2024-11-06 13:56:10.109197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:16.168 [2024-11-06 13:56:10.109259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:16.168 [2024-11-06 13:56:10.109289] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:16.168 [2024-11-06 13:56:10.109406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.168 [2024-11-06 13:56:10.109466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:16.168 [2024-11-06 13:56:10.109496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.168 [2024-11-06 13:56:10.109602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:16.168 [2024-11-06 13:56:10.109633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.168 [2024-11-06 13:56:10.109692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:16.168 [2024-11-06 13:56:10.109764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.168 [2024-11-06 13:56:10.109788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:16.168 [2024-11-06 13:56:10.109798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.168 [2024-11-06 13:56:10.109817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:16.168 [2024-11-06 13:56:10.109826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:16.168 [2024-11-06 13:56:10.109836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.168 [2024-11-06 13:56:10.109845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:16.168 [2024-11-06 13:56:10.109855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:16.168 [2024-11-06 13:56:10.109864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:16.168 [2024-11-06 13:56:10.109882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:16.168 [2024-11-06 13:56:10.109891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109900] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:16.168 [2024-11-06 13:56:10.109911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:16.168 [2024-11-06 13:56:10.109921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.168 [2024-11-06 13:56:10.109930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.168 [2024-11-06 13:56:10.109941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:16.168 [2024-11-06 13:56:10.109951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:16.168 [2024-11-06 13:56:10.109960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:16.168 
[2024-11-06 13:56:10.109970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:16.168 [2024-11-06 13:56:10.109979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:16.168 [2024-11-06 13:56:10.109989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:16.168 [2024-11-06 13:56:10.110000] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:16.168 [2024-11-06 13:56:10.110013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.168 [2024-11-06 13:56:10.110037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:16.168 [2024-11-06 13:56:10.110049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:16.168 [2024-11-06 13:56:10.110059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:16.168 [2024-11-06 13:56:10.110070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:16.168 [2024-11-06 13:56:10.110081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:16.168 [2024-11-06 13:56:10.110093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:16.168 [2024-11-06 13:56:10.110104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:16.168 [2024-11-06 13:56:10.110115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:16.168 [2024-11-06 13:56:10.110126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:16.168 [2024-11-06 13:56:10.110137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:16.168 [2024-11-06 13:56:10.110148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:16.168 [2024-11-06 13:56:10.110159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:16.168 [2024-11-06 13:56:10.110170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:16.168 [2024-11-06 13:56:10.110180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:16.168 [2024-11-06 13:56:10.110191] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:16.168 [2024-11-06 13:56:10.110208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.169 [2024-11-06 13:56:10.110219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.169 [2024-11-06 13:56:10.110231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:16.169 [2024-11-06 13:56:10.110241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:16.169 [2024-11-06 13:56:10.110252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:16.169 [2024-11-06 13:56:10.110265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.169 [2024-11-06 13:56:10.110276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:16.169 [2024-11-06 13:56:10.110290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.996 ms 00:30:16.169 [2024-11-06 13:56:10.110301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.152839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.152890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:16.428 [2024-11-06 13:56:10.152906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.466 ms 00:30:16.428 [2024-11-06 13:56:10.152917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.153032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.153044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:16.428 [2024-11-06 13:56:10.153056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:16.428 [2024-11-06 13:56:10.153066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.209971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.210041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:16.428 [2024-11-06 13:56:10.210056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.817 ms 00:30:16.428 [2024-11-06 13:56:10.210067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.210119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.210130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:16.428 [2024-11-06 13:56:10.210146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:16.428 [2024-11-06 13:56:10.210156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.210683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.210703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:16.428 [2024-11-06 13:56:10.210714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:30:16.428 [2024-11-06 13:56:10.210724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.210841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.210854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:16.428 [2024-11-06 13:56:10.210865] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:30:16.428 [2024-11-06 13:56:10.210881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.230109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.230279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:16.428 [2024-11-06 13:56:10.230322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.207 ms 00:30:16.428 [2024-11-06 13:56:10.230333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.249872] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:16.428 [2024-11-06 13:56:10.249921] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:16.428 [2024-11-06 13:56:10.249937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.249948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:16.428 [2024-11-06 13:56:10.249959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.469 ms 00:30:16.428 [2024-11-06 13:56:10.249970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.279638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.279681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:16.428 [2024-11-06 13:56:10.279696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.625 ms 00:30:16.428 [2024-11-06 13:56:10.279707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.297812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.297871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:16.428 [2024-11-06 13:56:10.297900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.059 ms 00:30:16.428 [2024-11-06 13:56:10.297911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.315633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.315684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:16.428 [2024-11-06 13:56:10.315697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.683 ms 00:30:16.428 [2024-11-06 13:56:10.315706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.316519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.316547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:16.428 [2024-11-06 13:56:10.316559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:30:16.428 [2024-11-06 13:56:10.316573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.428 [2024-11-06 13:56:10.403581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.428 [2024-11-06 13:56:10.403778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:16.428 [2024-11-06 13:56:10.403827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.986 ms 00:30:16.428 [2024-11-06 13:56:10.403838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.688 [2024-11-06 13:56:10.415177] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:16.688 [2024-11-06 13:56:10.418182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.688 [2024-11-06 13:56:10.418211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:16.688 [2024-11-06 13:56:10.418225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.253 ms 00:30:16.688 [2024-11-06 13:56:10.418234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.688 [2024-11-06 13:56:10.418327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.688 [2024-11-06 13:56:10.418340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:16.688 [2024-11-06 13:56:10.418357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:16.688 [2024-11-06 13:56:10.418370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.688 [2024-11-06 13:56:10.419961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.688 [2024-11-06 13:56:10.419996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:16.688 [2024-11-06 13:56:10.420008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:30:16.688 [2024-11-06 13:56:10.420033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.688 [2024-11-06 13:56:10.420070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.688 [2024-11-06 13:56:10.420082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:16.688 [2024-11-06 13:56:10.420093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:16.688 [2024-11-06 13:56:10.420103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.688 [2024-11-06 13:56:10.420142] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:16.688 [2024-11-06 13:56:10.420154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.688 [2024-11-06 13:56:10.420165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:16.688 [2024-11-06 13:56:10.420175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:16.688 [2024-11-06 13:56:10.420185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.688 [2024-11-06 13:56:10.456306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.688 [2024-11-06 13:56:10.456342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:16.688 [2024-11-06 13:56:10.456355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.102 ms 00:30:16.688 [2024-11-06 13:56:10.456371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.688 [2024-11-06 13:56:10.456446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.688 [2024-11-06 13:56:10.456459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:16.688 [2024-11-06 13:56:10.456469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:16.688 [2024-11-06 13:56:10.456479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:16.688 [2024-11-06 13:56:10.457596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.294 ms, result 0 00:30:18.065  [2024-11-06T13:56:13.039Z] Copying: 1128/1048576 [kB] (1128 kBps) [2024-11-06T13:56:13.975Z] Copying: 9200/1048576 [kB] (8072 kBps) [2024-11-06T13:56:14.912Z] Copying: 48/1024 [MB] (39 MBps) [2024-11-06T13:56:15.848Z] Copying: 87/1024 [MB] (39 MBps) [2024-11-06T13:56:16.785Z] Copying: 127/1024 [MB] (39 MBps) [2024-11-06T13:56:17.722Z] Copying: 167/1024 [MB] (39 MBps) [2024-11-06T13:56:19.099Z] Copying: 207/1024 [MB] (39 MBps) [2024-11-06T13:56:20.036Z] Copying: 247/1024 [MB] (39 MBps) [2024-11-06T13:56:20.973Z] Copying: 288/1024 [MB] (40 MBps) [2024-11-06T13:56:21.909Z] Copying: 327/1024 [MB] (39 MBps) [2024-11-06T13:56:22.940Z] Copying: 368/1024 [MB] (41 MBps) [2024-11-06T13:56:23.875Z] Copying: 408/1024 [MB] (39 MBps) [2024-11-06T13:56:24.811Z] Copying: 448/1024 [MB] (39 MBps) [2024-11-06T13:56:25.746Z] Copying: 489/1024 [MB] (40 MBps) [2024-11-06T13:56:27.123Z] Copying: 527/1024 [MB] (38 MBps) [2024-11-06T13:56:27.691Z] Copying: 566/1024 [MB] (38 MBps) [2024-11-06T13:56:29.133Z] Copying: 605/1024 [MB] (39 MBps) [2024-11-06T13:56:29.699Z] Copying: 644/1024 [MB] (38 MBps) [2024-11-06T13:56:31.075Z] Copying: 683/1024 [MB] (39 MBps) [2024-11-06T13:56:32.010Z] Copying: 723/1024 [MB] (40 MBps) [2024-11-06T13:56:32.946Z] Copying: 762/1024 [MB] (38 MBps) [2024-11-06T13:56:33.883Z] Copying: 802/1024 [MB] (39 MBps) [2024-11-06T13:56:34.820Z] Copying: 841/1024 [MB] (39 MBps) [2024-11-06T13:56:35.755Z] Copying: 880/1024 [MB] (38 MBps) [2024-11-06T13:56:36.691Z] Copying: 919/1024 [MB] (39 MBps) [2024-11-06T13:56:38.066Z] Copying: 959/1024 [MB] (39 MBps) [2024-11-06T13:56:38.633Z] Copying: 998/1024 [MB] (38 MBps) [2024-11-06T13:56:38.634Z] Copying: 1024/1024 [MB] (average 36 MBps)[2024-11-06 13:56:38.431626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.431726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:44.651 [2024-11-06 13:56:38.431753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:44.651 [2024-11-06 13:56:38.431773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.431809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:44.651 [2024-11-06 13:56:38.437061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.437102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:44.651 [2024-11-06 13:56:38.437121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.224 ms 00:30:44.651 [2024-11-06 13:56:38.437138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.437421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.437446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:44.651 [2024-11-06 13:56:38.437470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:30:44.651 [2024-11-06 13:56:38.437487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.448520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.448683] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:44.651 [2024-11-06 13:56:38.448781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.004 ms 00:30:44.651 [2024-11-06 13:56:38.448826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.454640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.454668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:44.651 [2024-11-06 13:56:38.454694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.641 ms 00:30:44.651 [2024-11-06 13:56:38.454704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.491326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.491463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:44.651 [2024-11-06 13:56:38.491541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.556 ms 00:30:44.651 [2024-11-06 13:56:38.491577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.512445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.512605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:44.651 [2024-11-06 13:56:38.512694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.811 ms 00:30:44.651 [2024-11-06 13:56:38.512731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.514194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.514335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:44.651 [2024-11-06 13:56:38.514424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:30:44.651 [2024-11-06 13:56:38.514478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.551222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.551384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:44.651 [2024-11-06 13:56:38.551514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.627 ms 00:30:44.651 [2024-11-06 13:56:38.551551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.587103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.587233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:44.651 [2024-11-06 13:56:38.587373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.490 ms 00:30:44.651 [2024-11-06 13:56:38.587410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.651 [2024-11-06 13:56:38.622028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.651 [2024-11-06 13:56:38.622152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:44.651 [2024-11-06 13:56:38.622188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.561 ms 00:30:44.651 [2024-11-06 13:56:38.622198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.910 [2024-11-06 13:56:38.658200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:30:44.910 [2024-11-06 13:56:38.658234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:44.910 [2024-11-06 13:56:38.658247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.894 ms 00:30:44.910 [2024-11-06 13:56:38.658273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.910 [2024-11-06 13:56:38.658311] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:44.910 [2024-11-06 13:56:38.658328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:44.910 [2024-11-06 13:56:38.658341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:44.911 [2024-11-06 13:56:38.658352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658564] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658837] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.658997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 
13:56:38.659133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:44.911 [2024-11-06 13:56:38.659333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:30:44.912 [2024-11-06 13:56:38.659396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:44.912 [2024-11-06 13:56:38.659445] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:44.912 [2024-11-06 13:56:38.659455] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e50b95b7-345a-4ae3-a27b-2754588f5046 00:30:44.912 [2024-11-06 13:56:38.659466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:44.912 [2024-11-06 13:56:38.659477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 152512 00:30:44.912 [2024-11-06 13:56:38.659486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 150528 00:30:44.912 [2024-11-06 13:56:38.659502] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0132 00:30:44.912 [2024-11-06 13:56:38.659512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:44.912 [2024-11-06 13:56:38.659522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:44.912 [2024-11-06 13:56:38.659532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:44.912 [2024-11-06 13:56:38.659551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:44.912 [2024-11-06 13:56:38.659561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:44.912 [2024-11-06 13:56:38.659570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.912 [2024-11-06 13:56:38.659581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:44.912 [2024-11-06 13:56:38.659591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:30:44.912 [2024-11-06 13:56:38.659606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.912 [2024-11-06 13:56:38.680183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.912 [2024-11-06 13:56:38.680221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:44.912 [2024-11-06 13:56:38.680234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.540 ms 00:30:44.912 [2024-11-06 13:56:38.680244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.912 [2024-11-06 13:56:38.680877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.912 [2024-11-06 13:56:38.680900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:44.912 [2024-11-06 13:56:38.680912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:30:44.912 [2024-11-06 13:56:38.680922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.912 [2024-11-06 13:56:38.733515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:44.912 [2024-11-06 13:56:38.733551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:44.912 [2024-11-06 13:56:38.733563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:30:44.912 [2024-11-06 13:56:38.733589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.912 [2024-11-06 13:56:38.733647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:44.912 [2024-11-06 13:56:38.733658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:44.912 [2024-11-06 13:56:38.733668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:44.912 [2024-11-06 13:56:38.733678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.912 [2024-11-06 13:56:38.733754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:44.912 [2024-11-06 13:56:38.733767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:44.912 [2024-11-06 13:56:38.733778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:44.912 [2024-11-06 13:56:38.733788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.912 [2024-11-06 13:56:38.733805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:44.912 [2024-11-06 13:56:38.733815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:44.912 [2024-11-06 13:56:38.733825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:44.912 [2024-11-06 13:56:38.733835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.912 [2024-11-06 13:56:38.859006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:44.912 [2024-11-06 13:56:38.859201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:44.912 [2024-11-06 13:56:38.859224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:44.912 [2024-11-06 13:56:38.859236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.171 [2024-11-06 13:56:38.959150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.171 [2024-11-06 13:56:38.959327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:45.171 [2024-11-06 13:56:38.959404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.171 [2024-11-06 13:56:38.959442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.171 [2024-11-06 13:56:38.959564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.171 [2024-11-06 13:56:38.959609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:45.171 [2024-11-06 13:56:38.959655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.171 [2024-11-06 13:56:38.959686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.171 [2024-11-06 13:56:38.959753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.171 [2024-11-06 13:56:38.959787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:45.171 [2024-11-06 13:56:38.959916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.171 [2024-11-06 13:56:38.959953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.171 [2024-11-06 13:56:38.960126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.171 [2024-11-06 13:56:38.960179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:45.171 [2024-11-06 
13:56:38.960302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.171 [2024-11-06 13:56:38.960339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.171 [2024-11-06 13:56:38.960417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.171 [2024-11-06 13:56:38.960465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:45.171 [2024-11-06 13:56:38.960497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.171 [2024-11-06 13:56:38.960572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.171 [2024-11-06 13:56:38.960642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.171 [2024-11-06 13:56:38.960792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:45.171 [2024-11-06 13:56:38.960829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.171 [2024-11-06 13:56:38.960865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.171 [2024-11-06 13:56:38.960939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.171 [2024-11-06 13:56:38.961077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:45.171 [2024-11-06 13:56:38.961116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.171 [2024-11-06 13:56:38.961148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.171 [2024-11-06 13:56:38.961302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.646 ms, result 0 00:30:46.107 00:30:46.107 00:30:46.107 13:56:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:48.665 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:48.665 13:56:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:48.665 [2024-11-06 13:56:42.129737] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:30:48.665 [2024-11-06 13:56:42.130092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80392 ] 00:30:48.665 [2024-11-06 13:56:42.309679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.665 [2024-11-06 13:56:42.466680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.923 [2024-11-06 13:56:42.821678] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:48.923 [2024-11-06 13:56:42.821744] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:49.181 [2024-11-06 13:56:42.983928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:42.984186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:49.181 [2024-11-06 13:56:42.984220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:49.181 [2024-11-06 13:56:42.984231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:42.984291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:42.984304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:49.181 [2024-11-06 13:56:42.984319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:49.181 [2024-11-06 13:56:42.984329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:42.984351] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:49.181 [2024-11-06 13:56:42.985397] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:49.181 [2024-11-06 13:56:42.985424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:42.985436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:49.181 [2024-11-06 13:56:42.985447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:30:49.181 [2024-11-06 13:56:42.985456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:42.986873] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:49.181 [2024-11-06 13:56:43.006170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:43.006344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:49.181 [2024-11-06 13:56:43.006366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.298 ms 00:30:49.181 [2024-11-06 13:56:43.006378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:43.006454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:43.006469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:49.181 [2024-11-06 13:56:43.006480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:49.181 [2024-11-06 13:56:43.006490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:43.013259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:49.181 [2024-11-06 13:56:43.013427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:49.181 [2024-11-06 13:56:43.013447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.693 ms 00:30:49.181 [2024-11-06 13:56:43.013465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:43.013547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:43.013560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:49.181 [2024-11-06 13:56:43.013571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:49.181 [2024-11-06 13:56:43.013581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:43.013625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:43.013637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:49.181 [2024-11-06 13:56:43.013648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:49.181 [2024-11-06 13:56:43.013658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:43.013688] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:49.181 [2024-11-06 13:56:43.018590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:43.018623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:49.181 [2024-11-06 13:56:43.018636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.912 ms 00:30:49.181 [2024-11-06 13:56:43.018650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:43.018681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.181 [2024-11-06 13:56:43.018692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:49.181 [2024-11-06 13:56:43.018702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:49.181 [2024-11-06 13:56:43.018713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.181 [2024-11-06 13:56:43.018766] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:49.181 [2024-11-06 13:56:43.018789] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:49.181 [2024-11-06 13:56:43.018826] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:49.181 [2024-11-06 13:56:43.018847] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:49.181 [2024-11-06 13:56:43.018937] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:49.181 [2024-11-06 13:56:43.018950] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:49.181 [2024-11-06 13:56:43.018963] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:49.182 [2024-11-06 13:56:43.018976] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:49.182 [2024-11-06 13:56:43.018988] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019000] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:49.182 [2024-11-06 13:56:43.019010] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:49.182 [2024-11-06 13:56:43.019039] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:49.182 [2024-11-06 13:56:43.019053] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:49.182 [2024-11-06 13:56:43.019064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.019075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:49.182 [2024-11-06 13:56:43.019086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:30:49.182 [2024-11-06 13:56:43.019096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.019169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.019180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:49.182 [2024-11-06 13:56:43.019190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:49.182 [2024-11-06 13:56:43.019200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.019300] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:49.182 [2024-11-06 13:56:43.019315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:49.182 [2024-11-06 13:56:43.019326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:49.182 [2024-11-06 13:56:43.019357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:49.182 [2024-11-06 13:56:43.019386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:49.182 [2024-11-06 13:56:43.019405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:49.182 [2024-11-06 13:56:43.019414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:49.182 [2024-11-06 13:56:43.019423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:49.182 [2024-11-06 13:56:43.019433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:49.182 [2024-11-06 13:56:43.019442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:49.182 [2024-11-06 13:56:43.019461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:49.182 [2024-11-06 13:56:43.019479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019489] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:49.182 [2024-11-06 13:56:43.019508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:49.182 [2024-11-06 13:56:43.019536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:49.182 [2024-11-06 13:56:43.019564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:49.182 [2024-11-06 13:56:43.019591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:49.182 [2024-11-06 13:56:43.019619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:49.182 [2024-11-06 13:56:43.019638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:49.182 [2024-11-06 13:56:43.019647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:49.182 [2024-11-06 13:56:43.019656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:49.182 [2024-11-06 13:56:43.019665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:49.182 [2024-11-06 13:56:43.019674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:49.182 [2024-11-06 13:56:43.019683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:49.182 [2024-11-06 13:56:43.019701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:49.182 [2024-11-06 13:56:43.019711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019720] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:49.182 [2024-11-06 13:56:43.019730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:49.182 [2024-11-06 13:56:43.019740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.182 [2024-11-06 13:56:43.019761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:49.182 [2024-11-06 13:56:43.019770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:49.182 [2024-11-06 13:56:43.019780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:49.182 
[2024-11-06 13:56:43.019789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:49.182 [2024-11-06 13:56:43.019798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:49.182 [2024-11-06 13:56:43.019807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:49.182 [2024-11-06 13:56:43.019818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:49.182 [2024-11-06 13:56:43.019830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.182 [2024-11-06 13:56:43.019842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:49.182 [2024-11-06 13:56:43.019852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:49.182 [2024-11-06 13:56:43.019862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:49.182 [2024-11-06 13:56:43.019873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:49.182 [2024-11-06 13:56:43.019883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:49.182 [2024-11-06 13:56:43.019893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:49.182 [2024-11-06 13:56:43.019903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:49.182 [2024-11-06 13:56:43.019914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:49.182 [2024-11-06 13:56:43.019924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:49.182 [2024-11-06 13:56:43.019934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:49.182 [2024-11-06 13:56:43.019945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:49.182 [2024-11-06 13:56:43.019955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:49.182 [2024-11-06 13:56:43.019965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:49.182 [2024-11-06 13:56:43.019975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:49.182 [2024-11-06 13:56:43.019985] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:49.182 [2024-11-06 13:56:43.020000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.182 [2024-11-06 13:56:43.020011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:49.182 [2024-11-06 13:56:43.020033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:49.182 [2024-11-06 13:56:43.020044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:49.182 [2024-11-06 13:56:43.020059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:49.182 [2024-11-06 13:56:43.020070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.020081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:49.182 [2024-11-06 13:56:43.020091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:30:49.182 [2024-11-06 13:56:43.020101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.058213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.058412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:49.182 [2024-11-06 13:56:43.058459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.064 ms 00:30:49.182 [2024-11-06 13:56:43.058471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.058565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.058576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:49.182 [2024-11-06 13:56:43.058587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:49.182 [2024-11-06 13:56:43.058597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.116561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.116598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:49.182 [2024-11-06 13:56:43.116612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.896 ms 00:30:49.182 [2024-11-06 13:56:43.116638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.116678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.116689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:49.182 [2024-11-06 13:56:43.116704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:49.182 [2024-11-06 13:56:43.116714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.117234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.117249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:49.182 [2024-11-06 13:56:43.117261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:30:49.182 [2024-11-06 13:56:43.117271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.117390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.117404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:49.182 [2024-11-06 13:56:43.117415] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:30:49.182 [2024-11-06 13:56:43.117431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.135739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.135773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:49.182 [2024-11-06 13:56:43.135790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.287 ms 00:30:49.182 [2024-11-06 13:56:43.135800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.182 [2024-11-06 13:56:43.154846] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:49.182 [2024-11-06 13:56:43.155004] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:49.182 [2024-11-06 13:56:43.155037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.182 [2024-11-06 13:56:43.155049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:49.182 [2024-11-06 13:56:43.155061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.137 ms 00:30:49.182 [2024-11-06 13:56:43.155070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.184648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.184701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:49.442 [2024-11-06 13:56:43.184715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.538 ms 00:30:49.442 [2024-11-06 13:56:43.184726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.202708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.202744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:49.442 [2024-11-06 13:56:43.202757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.937 ms 00:30:49.442 [2024-11-06 13:56:43.202767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.220816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.220850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:49.442 [2024-11-06 13:56:43.220862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.011 ms 00:30:49.442 [2024-11-06 13:56:43.220888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.221752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.221786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:49.442 [2024-11-06 13:56:43.221799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:30:49.442 [2024-11-06 13:56:43.221814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.310267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.310318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:49.442 [2024-11-06 13:56:43.310357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.429 ms 00:30:49.442 [2024-11-06 13:56:43.310367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.321082] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:49.442 [2024-11-06 13:56:43.324119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.324266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:49.442 [2024-11-06 13:56:43.324305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.698 ms 00:30:49.442 [2024-11-06 13:56:43.324316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.324416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.324429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:49.442 [2024-11-06 13:56:43.324441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:49.442 [2024-11-06 13:56:43.324455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.325345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.325367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:49.442 [2024-11-06 13:56:43.325379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:30:49.442 [2024-11-06 13:56:43.325389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.325417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.325429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:49.442 [2024-11-06 13:56:43.325439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:49.442 [2024-11-06 13:56:43.325449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.325488] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:49.442 [2024-11-06 13:56:43.325501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.325511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:49.442 [2024-11-06 13:56:43.325522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:49.442 [2024-11-06 13:56:43.325532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.362934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.363084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:49.442 [2024-11-06 13:56:43.363107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.382 ms 00:30:49.442 [2024-11-06 13:56:43.363125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.442 [2024-11-06 13:56:43.363243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.442 [2024-11-06 13:56:43.363258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:49.442 [2024-11-06 13:56:43.363270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:49.442 [2024-11-06 13:56:43.363280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:49.442 [2024-11-06 13:56:43.364452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 379.979 ms, result 0 00:30:50.818  [2024-11-06T13:56:45.734Z] Copying: 32/1024 [MB] (32 MBps) [2024-11-06T13:56:46.667Z] Copying: 65/1024 [MB] (33 MBps) [2024-11-06T13:56:47.601Z] Copying: 98/1024 [MB] (33 MBps) [2024-11-06T13:56:48.975Z] Copying: 129/1024 [MB] (30 MBps) [2024-11-06T13:56:49.909Z] Copying: 161/1024 [MB] (31 MBps) [2024-11-06T13:56:50.596Z] Copying: 193/1024 [MB] (32 MBps) [2024-11-06T13:56:51.590Z] Copying: 227/1024 [MB] (33 MBps) [2024-11-06T13:56:52.966Z] Copying: 259/1024 [MB] (32 MBps) [2024-11-06T13:56:53.903Z] Copying: 293/1024 [MB] (33 MBps) [2024-11-06T13:56:54.839Z] Copying: 324/1024 [MB] (31 MBps) [2024-11-06T13:56:55.776Z] Copying: 357/1024 [MB] (32 MBps) [2024-11-06T13:56:56.713Z] Copying: 389/1024 [MB] (32 MBps) [2024-11-06T13:56:57.650Z] Copying: 420/1024 [MB] (30 MBps) [2024-11-06T13:56:58.587Z] Copying: 451/1024 [MB] (31 MBps) [2024-11-06T13:56:59.963Z] Copying: 484/1024 [MB] (32 MBps) [2024-11-06T13:57:00.935Z] Copying: 516/1024 [MB] (31 MBps) [2024-11-06T13:57:01.872Z] Copying: 549/1024 [MB] (33 MBps) [2024-11-06T13:57:02.809Z] Copying: 582/1024 [MB] (33 MBps) [2024-11-06T13:57:03.749Z] Copying: 616/1024 [MB] (33 MBps) [2024-11-06T13:57:04.684Z] Copying: 648/1024 [MB] (32 MBps) [2024-11-06T13:57:05.619Z] Copying: 680/1024 [MB] (32 MBps) [2024-11-06T13:57:06.995Z] Copying: 710/1024 [MB] (30 MBps) [2024-11-06T13:57:07.931Z] Copying: 742/1024 [MB] (31 MBps) [2024-11-06T13:57:08.868Z] Copying: 773/1024 [MB] (31 MBps) [2024-11-06T13:57:09.806Z] Copying: 806/1024 [MB] (32 MBps) [2024-11-06T13:57:10.740Z] Copying: 838/1024 [MB] (31 MBps) [2024-11-06T13:57:11.674Z] Copying: 868/1024 [MB] (29 MBps) [2024-11-06T13:57:12.608Z] Copying: 895/1024 [MB] (27 MBps) [2024-11-06T13:57:14.022Z] Copying: 923/1024 [MB] (27 MBps) [2024-11-06T13:57:14.588Z] Copying: 952/1024 [MB] (28 MBps) [2024-11-06T13:57:15.963Z] Copying: 983/1024 [MB] (31 MBps) [2024-11-06T13:57:15.963Z] Copying: 1015/1024 [MB] (31 MBps) [2024-11-06T13:57:16.222Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-06 13:57:16.000776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.000853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:22.239 [2024-11-06 13:57:16.000875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:22.239 [2024-11-06 13:57:16.000890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.000922] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:22.239 [2024-11-06 13:57:16.007287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.007324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:22.239 [2024-11-06 13:57:16.007343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.342 ms 00:31:22.239 [2024-11-06 13:57:16.007354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.007561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.007574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:22.239 [2024-11-06 13:57:16.007585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.178 ms 00:31:22.239 [2024-11-06 13:57:16.007596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.010349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.010371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:22.239 [2024-11-06 13:57:16.010383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:31:22.239 [2024-11-06 13:57:16.010393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.015531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.015560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:22.239 [2024-11-06 13:57:16.015573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.113 ms 00:31:22.239 [2024-11-06 13:57:16.015583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.053799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.053838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:22.239 [2024-11-06 13:57:16.053853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.148 ms 00:31:22.239 [2024-11-06 13:57:16.053864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.075833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.075873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:22.239 [2024-11-06 13:57:16.075888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.928 ms 00:31:22.239 [2024-11-06 13:57:16.075899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.078289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.078334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:22.239 [2024-11-06 13:57:16.078349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.346 ms 00:31:22.239 [2024-11-06 13:57:16.078360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.116942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.116983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:22.239 [2024-11-06 13:57:16.116998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.562 ms 00:31:22.239 [2024-11-06 13:57:16.117009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.154424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.154475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:22.239 [2024-11-06 13:57:16.154489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.362 ms 00:31:22.239 [2024-11-06 13:57:16.154514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.239 [2024-11-06 13:57:16.191300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.239 [2024-11-06 13:57:16.191337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:22.239 [2024-11-06 13:57:16.191351] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.747 ms 00:31:22.239 [2024-11-06 13:57:16.191361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.499 [2024-11-06 13:57:16.227565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.499 [2024-11-06 13:57:16.227601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:22.499 [2024-11-06 13:57:16.227615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.119 ms 00:31:22.499 [2024-11-06 13:57:16.227625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.499 [2024-11-06 13:57:16.227662] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:22.499 [2024-11-06 13:57:16.227680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:22.499 [2024-11-06 13:57:16.227700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:22.499 [2024-11-06 13:57:16.227712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227895] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.227992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228190] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:22.499 [2024-11-06 13:57:16.228277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 
13:57:16.228460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:31:22.500 [2024-11-06 13:57:16.228727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:22.500 [2024-11-06 13:57:16.228798] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:22.500 [2024-11-06 13:57:16.228812] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e50b95b7-345a-4ae3-a27b-2754588f5046 00:31:22.500 [2024-11-06 13:57:16.228824] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:22.500 [2024-11-06 13:57:16.228834] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:22.500 [2024-11-06 13:57:16.228844] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:22.500 [2024-11-06 13:57:16.228854] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:22.500 [2024-11-06 13:57:16.228864] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:22.500 [2024-11-06 13:57:16.228874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:22.500 [2024-11-06 13:57:16.228895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:22.500 [2024-11-06 13:57:16.228904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:22.500 [2024-11-06 13:57:16.228913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:22.500 [2024-11-06 13:57:16.228924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.500 [2024-11-06 13:57:16.228934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:22.500 [2024-11-06 13:57:16.228945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:31:22.500 [2024-11-06 13:57:16.228955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.500 [2024-11-06 13:57:16.249082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.500 [2024-11-06 13:57:16.249116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:22.500 [2024-11-06 13:57:16.249129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.087 ms 00:31:22.500 [2024-11-06 13:57:16.249140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.500 [2024-11-06 13:57:16.249697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.500 [2024-11-06 13:57:16.249730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:22.500 [2024-11-06 13:57:16.249748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:31:22.500 [2024-11-06 13:57:16.249758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.500 [2024-11-06 13:57:16.303055] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.500 [2024-11-06 13:57:16.303102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:22.500 [2024-11-06 13:57:16.303117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.500 [2024-11-06 13:57:16.303128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.500 [2024-11-06 13:57:16.303189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.500 [2024-11-06 13:57:16.303201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:22.500 [2024-11-06 13:57:16.303218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.500 [2024-11-06 13:57:16.303228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.500 [2024-11-06 13:57:16.303299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.500 [2024-11-06 13:57:16.303313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:22.500 [2024-11-06 13:57:16.303324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.500 [2024-11-06 13:57:16.303334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.500 [2024-11-06 13:57:16.303351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.500 [2024-11-06 13:57:16.303362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:22.500 [2024-11-06 13:57:16.303372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.500 [2024-11-06 13:57:16.303387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.500 [2024-11-06 13:57:16.431172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.500 [2024-11-06 13:57:16.431235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:22.500 [2024-11-06 13:57:16.431267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.500 [2024-11-06 13:57:16.431279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.760 [2024-11-06 13:57:16.535319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.760 [2024-11-06 13:57:16.535495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:22.760 [2024-11-06 13:57:16.535525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.760 [2024-11-06 13:57:16.535537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.760 [2024-11-06 13:57:16.535631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.760 [2024-11-06 13:57:16.535643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:22.760 [2024-11-06 13:57:16.535654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.760 [2024-11-06 13:57:16.535665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.760 [2024-11-06 13:57:16.535716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.760 [2024-11-06 13:57:16.535728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:22.760 [2024-11-06 13:57:16.535739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.760 [2024-11-06 13:57:16.535749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:31:22.760 [2024-11-06 13:57:16.535874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.760 [2024-11-06 13:57:16.535888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:22.760 [2024-11-06 13:57:16.535898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.760 [2024-11-06 13:57:16.535908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.760 [2024-11-06 13:57:16.535943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.760 [2024-11-06 13:57:16.535956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:22.760 [2024-11-06 13:57:16.535967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.760 [2024-11-06 13:57:16.535978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.760 [2024-11-06 13:57:16.536042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.760 [2024-11-06 13:57:16.536055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:22.760 [2024-11-06 13:57:16.536066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.760 [2024-11-06 13:57:16.536076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.760 [2024-11-06 13:57:16.536120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.760 [2024-11-06 13:57:16.536132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:22.760 [2024-11-06 13:57:16.536142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.760 [2024-11-06 13:57:16.536153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.760 [2024-11-06 13:57:16.536273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.468 ms, result 0 00:31:23.696 00:31:23.696 00:31:23.696 13:57:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:25.599 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:25.599 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:25.599 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:25.599 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:25.599 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:25.857 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:25.857 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:26.116 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:26.116 Process with pid 78675 is not found 00:31:26.116 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78675 00:31:26.116 13:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78675 ']' 00:31:26.116 13:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 78675 00:31:26.116 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (78675) - No such process 
00:31:26.116 13:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 78675 is not found' 00:31:26.116 13:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:26.375 Remove shared memory files 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:26.375 ************************************ 00:31:26.375 END TEST ftl_dirty_shutdown 00:31:26.375 ************************************ 00:31:26.375 00:31:26.375 real 3m19.586s 00:31:26.375 user 3m46.797s 00:31:26.375 sys 0m39.743s 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:26.375 13:57:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:26.375 13:57:20 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:26.375 13:57:20 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:26.375 13:57:20 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:26.375 13:57:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:26.375 ************************************ 00:31:26.375 START TEST ftl_upgrade_shutdown 00:31:26.375 ************************************ 00:31:26.375 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:26.375 * Looking for test storage... 
00:31:26.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:26.375 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:26.375 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:31:26.375 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:26.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.637 --rc genhtml_branch_coverage=1 00:31:26.637 --rc genhtml_function_coverage=1 00:31:26.637 --rc genhtml_legend=1 00:31:26.637 --rc geninfo_all_blocks=1 00:31:26.637 --rc geninfo_unexecuted_blocks=1 00:31:26.637 00:31:26.637 ' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:26.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.637 --rc genhtml_branch_coverage=1 00:31:26.637 --rc genhtml_function_coverage=1 00:31:26.637 --rc genhtml_legend=1 00:31:26.637 --rc geninfo_all_blocks=1 00:31:26.637 --rc geninfo_unexecuted_blocks=1 00:31:26.637 00:31:26.637 ' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:26.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.637 --rc genhtml_branch_coverage=1 00:31:26.637 --rc genhtml_function_coverage=1 00:31:26.637 --rc genhtml_legend=1 00:31:26.637 --rc geninfo_all_blocks=1 00:31:26.637 --rc geninfo_unexecuted_blocks=1 00:31:26.637 00:31:26.637 ' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:26.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.637 --rc genhtml_branch_coverage=1 00:31:26.637 --rc genhtml_function_coverage=1 00:31:26.637 --rc genhtml_legend=1 00:31:26.637 --rc geninfo_all_blocks=1 00:31:26.637 --rc geninfo_unexecuted_blocks=1 00:31:26.637 00:31:26.637 ' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:26.637 13:57:20 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80848 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80848 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80848 ']' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:26.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:26.637 13:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:26.897 [2024-11-06 13:57:20.638103] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:31:26.897 [2024-11-06 13:57:20.638278] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80848 ] 00:31:26.897 [2024-11-06 13:57:20.838589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.155 [2024-11-06 13:57:21.009365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:28.091 13:57:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:28.350 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:28.350 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:28.350 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:28.350 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:31:28.350 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:31:28.350 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:31:28.350 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:31:28.350 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:28.609 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:31:28.609 { 00:31:28.609 "name": "basen1", 00:31:28.609 "aliases": [ 00:31:28.609 "c89f5b18-7713-4e22-afb8-27a0fa65321c" 00:31:28.609 ], 00:31:28.609 "product_name": "NVMe disk", 00:31:28.609 "block_size": 4096, 00:31:28.609 "num_blocks": 1310720, 00:31:28.609 "uuid": "c89f5b18-7713-4e22-afb8-27a0fa65321c", 00:31:28.609 "numa_id": -1, 00:31:28.609 "assigned_rate_limits": { 00:31:28.609 "rw_ios_per_sec": 0, 00:31:28.609 "rw_mbytes_per_sec": 0, 00:31:28.609 "r_mbytes_per_sec": 0, 00:31:28.609 "w_mbytes_per_sec": 0 00:31:28.609 }, 00:31:28.609 "claimed": true, 00:31:28.609 "claim_type": "read_many_write_one", 00:31:28.609 "zoned": false, 00:31:28.609 "supported_io_types": { 00:31:28.609 "read": true, 00:31:28.609 "write": true, 00:31:28.609 "unmap": true, 00:31:28.609 "flush": true, 00:31:28.609 "reset": true, 00:31:28.609 "nvme_admin": true, 00:31:28.609 "nvme_io": true, 00:31:28.609 "nvme_io_md": false, 00:31:28.609 "write_zeroes": true, 00:31:28.609 "zcopy": false, 00:31:28.609 "get_zone_info": false, 00:31:28.609 "zone_management": false, 00:31:28.609 "zone_append": false, 00:31:28.609 "compare": true, 00:31:28.609 "compare_and_write": false, 00:31:28.609 "abort": true, 00:31:28.609 "seek_hole": false, 00:31:28.609 "seek_data": false, 00:31:28.609 "copy": true, 00:31:28.609 "nvme_iov_md": false 00:31:28.609 }, 00:31:28.609 "driver_specific": { 00:31:28.609 "nvme": [ 00:31:28.609 { 00:31:28.609 "pci_address": "0000:00:11.0", 00:31:28.609 "trid": { 00:31:28.609 "trtype": "PCIe", 00:31:28.609 "traddr": "0000:00:11.0" 00:31:28.609 }, 00:31:28.610 "ctrlr_data": { 00:31:28.610 "cntlid": 0, 00:31:28.610 "vendor_id": "0x1b36", 00:31:28.610 "model_number": "QEMU NVMe Ctrl", 00:31:28.610 "serial_number": "12341", 00:31:28.610 "firmware_revision": "8.0.0", 00:31:28.610 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:28.610 "oacs": { 00:31:28.610 "security": 0, 00:31:28.610 "format": 1, 00:31:28.610 "firmware": 0, 00:31:28.610 "ns_manage": 1 00:31:28.610 }, 00:31:28.610 "multi_ctrlr": false, 00:31:28.610 "ana_reporting": false 00:31:28.610 }, 00:31:28.610 "vs": { 00:31:28.610 "nvme_version": "1.4" 00:31:28.610 }, 00:31:28.610 "ns_data": { 00:31:28.610 "id": 1, 00:31:28.610 "can_share": false 00:31:28.610 } 00:31:28.610 } 00:31:28.610 ], 00:31:28.610 "mp_policy": "active_passive" 00:31:28.610 } 00:31:28.610 } 00:31:28.610 ]' 00:31:28.610 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:31:28.610 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:31:28.610 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=1f3ce244-c2d0-4f04-a624-0c930b9518b4 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:28.869 13:57:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f3ce244-c2d0-4f04-a624-0c930b9518b4 00:31:29.437 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:29.437 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=e06e090e-3e62-4458-aef1-e77f24ab0d9e 00:31:29.437 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u e06e090e-3e62-4458-aef1-e77f24ab0d9e 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=9085e925-d48a-4426-a7d5-d3fb1870cd89 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 9085e925-d48a-4426-a7d5-d3fb1870cd89 ]] 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 9085e925-d48a-4426-a7d5-d3fb1870cd89 5120 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=9085e925-d48a-4426-a7d5-d3fb1870cd89 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9085e925-d48a-4426-a7d5-d3fb1870cd89 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=9085e925-d48a-4426-a7d5-d3fb1870cd89 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:31:29.696 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9085e925-d48a-4426-a7d5-d3fb1870cd89 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:31:29.955 { 00:31:29.955 "name": "9085e925-d48a-4426-a7d5-d3fb1870cd89", 00:31:29.955 "aliases": [ 00:31:29.955 "lvs/basen1p0" 00:31:29.955 ], 00:31:29.955 "product_name": "Logical Volume", 00:31:29.955 "block_size": 4096, 00:31:29.955 "num_blocks": 5242880, 00:31:29.955 "uuid": "9085e925-d48a-4426-a7d5-d3fb1870cd89", 00:31:29.955 "assigned_rate_limits": { 00:31:29.955 "rw_ios_per_sec": 0, 00:31:29.955 "rw_mbytes_per_sec": 0, 00:31:29.955 "r_mbytes_per_sec": 0, 00:31:29.955 "w_mbytes_per_sec": 0 00:31:29.955 }, 00:31:29.955 "claimed": false, 00:31:29.955 "zoned": false, 00:31:29.955 "supported_io_types": { 00:31:29.955 "read": true, 00:31:29.955 "write": true, 00:31:29.955 "unmap": true, 00:31:29.955 "flush": false, 00:31:29.955 "reset": true, 00:31:29.955 "nvme_admin": false, 00:31:29.955 "nvme_io": false, 00:31:29.955 "nvme_io_md": false, 00:31:29.955 "write_zeroes": 
true, 00:31:29.955 "zcopy": false, 00:31:29.955 "get_zone_info": false, 00:31:29.955 "zone_management": false, 00:31:29.955 "zone_append": false, 00:31:29.955 "compare": false, 00:31:29.955 "compare_and_write": false, 00:31:29.955 "abort": false, 00:31:29.955 "seek_hole": true, 00:31:29.955 "seek_data": true, 00:31:29.955 "copy": false, 00:31:29.955 "nvme_iov_md": false 00:31:29.955 }, 00:31:29.955 "driver_specific": { 00:31:29.955 "lvol": { 00:31:29.955 "lvol_store_uuid": "e06e090e-3e62-4458-aef1-e77f24ab0d9e", 00:31:29.955 "base_bdev": "basen1", 00:31:29.955 "thin_provision": true, 00:31:29.955 "num_allocated_clusters": 0, 00:31:29.955 "snapshot": false, 00:31:29.955 "clone": false, 00:31:29.955 "esnap_clone": false 00:31:29.955 } 00:31:29.955 } 00:31:29.955 } 00:31:29.955 ]' 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:29.955 13:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:30.214 13:57:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:30.214 13:57:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:30.214 13:57:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:30.473 13:57:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:30.473 13:57:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:30.473 13:57:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 9085e925-d48a-4426-a7d5-d3fb1870cd89 -c cachen1p0 --l2p_dram_limit 2 00:31:30.732 [2024-11-06 13:57:24.672625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.732 [2024-11-06 13:57:24.672681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:30.732 [2024-11-06 13:57:24.672701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:30.732 [2024-11-06 13:57:24.672713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.732 [2024-11-06 13:57:24.672779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.732 [2024-11-06 13:57:24.672792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:30.732 [2024-11-06 13:57:24.672805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:30.733 [2024-11-06 13:57:24.672816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.672840] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:30.733 [2024-11-06 
13:57:24.673858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:30.733 [2024-11-06 13:57:24.673891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.673903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:30.733 [2024-11-06 13:57:24.673917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.053 ms 00:31:30.733 [2024-11-06 13:57:24.673927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.674013] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 0677a2d7-978c-45f6-8156-1b7732eb1581 00:31:30.733 [2024-11-06 13:57:24.675493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.675525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:30.733 [2024-11-06 13:57:24.675538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:30.733 [2024-11-06 13:57:24.675552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.682986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.683031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:30.733 [2024-11-06 13:57:24.683044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.382 ms 00:31:30.733 [2024-11-06 13:57:24.683057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.683104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.683120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:30.733 [2024-11-06 13:57:24.683132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:30.733 [2024-11-06 13:57:24.683148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.683207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.683222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:30.733 [2024-11-06 13:57:24.683233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:30.733 [2024-11-06 13:57:24.683250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.683276] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:30.733 [2024-11-06 13:57:24.688405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.688436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:30.733 [2024-11-06 13:57:24.688469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.133 ms 00:31:30.733 [2024-11-06 13:57:24.688479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.688510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.688521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:30.733 [2024-11-06 13:57:24.688534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:30.733 [2024-11-06 13:57:24.688544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.688591] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:30.733 [2024-11-06 13:57:24.688727] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:30.733 [2024-11-06 13:57:24.688747] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:30.733 [2024-11-06 13:57:24.688761] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:30.733 [2024-11-06 13:57:24.688777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:30.733 [2024-11-06 13:57:24.688790] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:30.733 [2024-11-06 13:57:24.688803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:30.733 [2024-11-06 13:57:24.688813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:30.733 [2024-11-06 13:57:24.688828] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:30.733 [2024-11-06 13:57:24.688838] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:30.733 [2024-11-06 13:57:24.688851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.688861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:30.733 [2024-11-06 13:57:24.688875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:31:30.733 [2024-11-06 13:57:24.688885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.688962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.733 [2024-11-06 13:57:24.688972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:30.733 [2024-11-06 13:57:24.688987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:30.733 [2024-11-06 13:57:24.689008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.733 [2024-11-06 13:57:24.689116] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:30.733 [2024-11-06 13:57:24.689129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:30.733 [2024-11-06 13:57:24.689149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:30.733 [2024-11-06 13:57:24.689160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:30.733 [2024-11-06 13:57:24.689182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:30.733 [2024-11-06 13:57:24.689203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:30.733 [2024-11-06 13:57:24.689215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:30.733 [2024-11-06 13:57:24.689225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:30.733 [2024-11-06 13:57:24.689245] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:30.733 [2024-11-06 13:57:24.689258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:30.733 [2024-11-06 13:57:24.689279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:30.733 [2024-11-06 13:57:24.689288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:30.733 [2024-11-06 13:57:24.689311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:30.733 [2024-11-06 13:57:24.689324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:30.733 [2024-11-06 13:57:24.689346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:30.733 [2024-11-06 13:57:24.689355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:30.733 [2024-11-06 13:57:24.689367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:30.733 [2024-11-06 13:57:24.689376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:30.733 [2024-11-06 13:57:24.689387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:30.733 [2024-11-06 13:57:24.689396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:30.733 [2024-11-06 13:57:24.689408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:30.733 [2024-11-06 13:57:24.689418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:30.733 [2024-11-06 13:57:24.689429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:30.733 [2024-11-06 13:57:24.689438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:30.733 [2024-11-06 13:57:24.689450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:30.733 [2024-11-06 13:57:24.689459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:30.733 [2024-11-06 13:57:24.689473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:30.733 [2024-11-06 13:57:24.689482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:30.733 [2024-11-06 13:57:24.689503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:30.733 [2024-11-06 13:57:24.689514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:30.733 [2024-11-06 13:57:24.689534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.733 [2024-11-06 13:57:24.689556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:30.733 [2024-11-06 13:57:24.689565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:30.733 [2024-11-06 13:57:24.689576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.734 [2024-11-06 13:57:24.689585] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:30.734 [2024-11-06 13:57:24.689599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:30.734 [2024-11-06 13:57:24.689609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:30.734 [2024-11-06 13:57:24.689623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:30.734 [2024-11-06 13:57:24.689633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:30.734 [2024-11-06 13:57:24.689648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:30.734 [2024-11-06 13:57:24.689657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:30.734 [2024-11-06 13:57:24.689669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:30.734 [2024-11-06 13:57:24.689678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:30.734 [2024-11-06 13:57:24.689689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:30.734 [2024-11-06 13:57:24.689703] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:30.734 [2024-11-06 13:57:24.689718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:30.734 [2024-11-06 13:57:24.689746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:30.734 [2024-11-06 13:57:24.689779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:30.734 [2024-11-06 13:57:24.689791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:30.734 [2024-11-06 13:57:24.689802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:30.734 [2024-11-06 13:57:24.689815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:30.734 [2024-11-06 13:57:24.689899] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:30.734 [2024-11-06 13:57:24.689913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:30.734 [2024-11-06 13:57:24.689938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:30.734 [2024-11-06 13:57:24.689948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:30.734 [2024-11-06 13:57:24.689961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:30.734 [2024-11-06 13:57:24.689971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.734 [2024-11-06 13:57:24.689985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:30.734 [2024-11-06 13:57:24.689995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 00:31:30.734 [2024-11-06 13:57:24.690008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.734 [2024-11-06 13:57:24.690058] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
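The MiB figures in the layout dumps above follow directly from the hex block extents in the SB metadata and the 4096-byte block size reported by bdev_get_bdevs earlier; a minimal shell cross-check (illustrative only, not part of the traced scripts):

echo $(( 0xe80 * 4096 ))             # l2p extent: 3712 blocks = 15204352 B = 14.50 MiB, as dumped
echo $(( 5242880 * 4096 / 1048576 )) # base bdev: num_blocks * block_size = 20480 MiB capacity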
00:31:30.734 [2024-11-06 13:57:24.690076] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:33.292 [2024-11-06 13:57:27.100438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.100548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:33.292 [2024-11-06 13:57:27.100571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2410.360 ms 00:31:33.292 [2024-11-06 13:57:27.100588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.292 [2024-11-06 13:57:27.151633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.151709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:33.292 [2024-11-06 13:57:27.151729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.625 ms 00:31:33.292 [2024-11-06 13:57:27.151744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.292 [2024-11-06 13:57:27.151903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.151921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:33.292 [2024-11-06 13:57:27.151933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:33.292 [2024-11-06 13:57:27.151957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.292 [2024-11-06 13:57:27.207560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.207630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:33.292 [2024-11-06 13:57:27.207648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.555 ms 00:31:33.292 [2024-11-06 13:57:27.207663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.292 [2024-11-06 13:57:27.207737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.207759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:33.292 [2024-11-06 13:57:27.207772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:33.292 [2024-11-06 13:57:27.207787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.292 [2024-11-06 13:57:27.208672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.208699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:33.292 [2024-11-06 13:57:27.208711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.768 ms 00:31:33.292 [2024-11-06 13:57:27.208725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.292 [2024-11-06 13:57:27.208784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.208800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:33.292 [2024-11-06 13:57:27.208816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:31:33.292 [2024-11-06 13:57:27.208834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.292 [2024-11-06 13:57:27.235341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.235414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:33.292 [2024-11-06 13:57:27.235432] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.480 ms 00:31:33.292 [2024-11-06 13:57:27.235446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.292 [2024-11-06 13:57:27.266766] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:33.292 [2024-11-06 13:57:27.268614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.292 [2024-11-06 13:57:27.268641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:33.292 [2024-11-06 13:57:27.268662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.000 ms 00:31:33.292 [2024-11-06 13:57:27.268673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.551 [2024-11-06 13:57:27.304149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.551 [2024-11-06 13:57:27.304230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:33.551 [2024-11-06 13:57:27.304253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.389 ms 00:31:33.551 [2024-11-06 13:57:27.304265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.551 [2024-11-06 13:57:27.304394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.551 [2024-11-06 13:57:27.304413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:33.551 [2024-11-06 13:57:27.304434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:31:33.551 [2024-11-06 13:57:27.304445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.551 [2024-11-06 13:57:27.347505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.551 [2024-11-06 13:57:27.347575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:33.551 [2024-11-06 13:57:27.347597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.954 ms 00:31:33.551 [2024-11-06 13:57:27.347609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.551 [2024-11-06 13:57:27.388590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.551 [2024-11-06 13:57:27.388655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:33.551 [2024-11-06 13:57:27.388676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.922 ms 00:31:33.551 [2024-11-06 13:57:27.388687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.551 [2024-11-06 13:57:27.389412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.551 [2024-11-06 13:57:27.389434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:33.551 [2024-11-06 13:57:27.389450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.687 ms 00:31:33.551 [2024-11-06 13:57:27.389466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.551 [2024-11-06 13:57:27.506688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.551 [2024-11-06 13:57:27.506771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:33.551 [2024-11-06 13:57:27.506801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 117.138 ms 00:31:33.551 [2024-11-06 13:57:27.506813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.810 [2024-11-06 13:57:27.554682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:33.810 [2024-11-06 13:57:27.554773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:33.810 [2024-11-06 13:57:27.554811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.669 ms 00:31:33.810 [2024-11-06 13:57:27.554823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.810 [2024-11-06 13:57:27.598599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.810 [2024-11-06 13:57:27.598676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:33.810 [2024-11-06 13:57:27.598699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.660 ms 00:31:33.810 [2024-11-06 13:57:27.598710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.810 [2024-11-06 13:57:27.644402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.810 [2024-11-06 13:57:27.644477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:33.810 [2024-11-06 13:57:27.644505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.576 ms 00:31:33.810 [2024-11-06 13:57:27.644517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.810 [2024-11-06 13:57:27.644632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.810 [2024-11-06 13:57:27.644646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:33.810 [2024-11-06 13:57:27.644667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:33.810 [2024-11-06 13:57:27.644679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.810 [2024-11-06 13:57:27.644844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.810 [2024-11-06 13:57:27.644857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:33.810 [2024-11-06 13:57:27.644877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:33.810 [2024-11-06 13:57:27.644887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.810 [2024-11-06 13:57:27.646454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2973.232 ms, result 0 00:31:33.810 { 00:31:33.810 "name": "ftl", 00:31:33.810 "uuid": "0677a2d7-978c-45f6-8156-1b7732eb1581" 00:31:33.810 } 00:31:33.810 13:57:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:34.069 [2024-11-06 13:57:27.949229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.069 13:57:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:34.328 13:57:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:34.587 [2024-11-06 13:57:28.438830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:34.587 13:57:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:34.846 [2024-11-06 13:57:28.709665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:34.846 13:57:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:35.414 Fill FTL, iteration 1 00:31:35.414 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80969 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80969 /var/tmp/spdk.tgt.sock 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80969 ']' 00:31:35.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:35.415 13:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:35.415 [2024-11-06 13:57:29.233163] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
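The dd geometry set by upgrade_shutdown.sh above is self-consistent: each fill iteration writes the full 1 GiB extent in 1 MiB blocks at queue depth 2, so count is simply size over block size (worked check, using only the traced variables):

size=1073741824; bs=1048576
echo $(( size / bs ))   # 1024 == count: one 1 GiB pass per iteration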
00:31:35.415 [2024-11-06 13:57:29.233286] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80969 ] 00:31:35.673 [2024-11-06 13:57:29.416642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.673 [2024-11-06 13:57:29.577708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.610 13:57:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:36.610 13:57:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:31:36.610 13:57:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:36.869 ftln1 00:31:36.869 13:57:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:36.869 13:57:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:37.128 13:57:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:37.128 13:57:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80969 00:31:37.128 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80969 ']' 00:31:37.128 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80969 00:31:37.128 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:31:37.128 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:37.128 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80969 00:31:37.387 killing process with pid 80969 00:31:37.387 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:37.387 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:37.387 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80969' 00:31:37.387 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80969 00:31:37.387 13:57:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80969 00:31:39.923 13:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:39.923 13:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:39.923 [2024-11-06 13:57:33.626755] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
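The --json config consumed by the spdk_dd invocation above is assembled once by the tcp_initiator_setup helper traced earlier: attach the exported subsystem over TCP (which yields ftln1), then wrap the bdev subsystem config in a top-level JSON object; roughly as follows (the redirection into ini.json is assumed, as xtrace does not show it):

rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
  -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # exposes ftln1
{ echo '{"subsystems": ['
  rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
  echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json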
00:31:39.923 [2024-11-06 13:57:33.626923] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81024 ] 00:31:39.923 [2024-11-06 13:57:33.819340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.182 [2024-11-06 13:57:33.937316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.560  [2024-11-06T13:57:36.479Z] Copying: 244/1024 [MB] (244 MBps) [2024-11-06T13:57:37.416Z] Copying: 476/1024 [MB] (232 MBps) [2024-11-06T13:57:38.873Z] Copying: 708/1024 [MB] (232 MBps) [2024-11-06T13:57:38.873Z] Copying: 934/1024 [MB] (226 MBps) [2024-11-06T13:57:40.256Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:31:46.273 00:31:46.273 Calculate MD5 checksum, iteration 1 00:31:46.273 13:57:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:46.273 13:57:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:46.273 13:57:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:46.273 13:57:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:46.273 13:57:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:46.273 13:57:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:46.273 13:57:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:46.273 13:57:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:46.273 [2024-11-06 13:57:40.138663] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
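Each read-back pulls the freshly written region back over NVMe/TCP into a scratch file and records its digest in sums[] for later comparison; in outline (same commands as traced):

tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')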
00:31:46.273 [2024-11-06 13:57:40.139083] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81094 ] 00:31:46.532 [2024-11-06 13:57:40.314117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.532 [2024-11-06 13:57:40.433924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.436  [2024-11-06T13:57:42.678Z] Copying: 581/1024 [MB] (581 MBps) [2024-11-06T13:57:43.612Z] Copying: 1024/1024 [MB] (average 589 MBps) 00:31:49.629 00:31:49.629 13:57:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:49.629 13:57:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:51.533 Fill FTL, iteration 2 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=db22e35ff2b54279287ceec3b355e83b 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:51.533 13:57:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:51.792 [2024-11-06 13:57:45.541106] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
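The seek=/skip= updates above advance both offsets by count after every pass, so iteration 2 writes and then verifies the second gigabyte; schematically:

seek=0; skip=0
for (( i = 0; i < 2; i++ )); do            # iterations=2
  echo "fill at MiB offset $seek";      seek=$(( seek + 1024 ))
  echo "read back at MiB offset $skip"; skip=$(( skip + 1024 ))
done                                       # seek ends at 2048, matching the trace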
00:31:51.792 [2024-11-06 13:57:45.541235] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81154 ] 00:31:51.792 [2024-11-06 13:57:45.725762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.051 [2024-11-06 13:57:45.923702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.953  [2024-11-06T13:57:48.504Z] Copying: 242/1024 [MB] (242 MBps) [2024-11-06T13:57:49.441Z] Copying: 486/1024 [MB] (244 MBps) [2024-11-06T13:57:50.817Z] Copying: 731/1024 [MB] (245 MBps) [2024-11-06T13:57:50.817Z] Copying: 977/1024 [MB] (246 MBps) [2024-11-06T13:57:52.195Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:31:58.212 00:31:58.212 Calculate MD5 checksum, iteration 2 00:31:58.212 13:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:58.212 13:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:58.212 13:57:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:58.212 13:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:58.212 13:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:58.212 13:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:58.212 13:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:58.212 13:57:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:58.212 [2024-11-06 13:57:51.925066] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
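Ahead of the shutdown, the test counts cache chunks that hold data (the jq filter traced further below); a worked reading of the properties it inspects:

rpc.py bdev_ftl_get_properties -b ftl |
  jq '[.properties[] | select(.name == "cache_device") | .chunks[]
       | select(.utilization != 0.0)] | length'
# yields 3 here: chunks 1 and 2 CLOSED at utilization 1.0, chunk 3 OPEN at 0.001953125 (= 1/512)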
00:31:58.212 [2024-11-06 13:57:51.925473] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81218 ] 00:31:58.212 [2024-11-06 13:57:52.119187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.472 [2024-11-06 13:57:52.267471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.438  [2024-11-06T13:57:54.679Z] Copying: 604/1024 [MB] (604 MBps) [2024-11-06T13:57:56.058Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:32:02.075 00:32:02.334 13:57:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:02.334 13:57:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:04.239 13:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:04.239 13:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2b4fec4495d227f2ce0e1507f3dc7ea7 00:32:04.239 13:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:04.239 13:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:04.239 13:57:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:04.239 [2024-11-06 13:57:58.110659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.239 [2024-11-06 13:57:58.110950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:04.239 [2024-11-06 13:57:58.110979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:04.239 [2024-11-06 13:57:58.110992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.239 [2024-11-06 13:57:58.111060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.239 [2024-11-06 13:57:58.111075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:04.239 [2024-11-06 13:57:58.111094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:04.239 [2024-11-06 13:57:58.111107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.239 [2024-11-06 13:57:58.111129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.239 [2024-11-06 13:57:58.111142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:04.239 [2024-11-06 13:57:58.111153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:04.239 [2024-11-06 13:57:58.111164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.239 [2024-11-06 13:57:58.111237] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.575 ms, result 0 00:32:04.239 true 00:32:04.239 13:57:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:04.499 { 00:32:04.499 "name": "ftl", 00:32:04.499 "properties": [ 00:32:04.499 { 00:32:04.499 "name": "superblock_version", 00:32:04.499 "value": 5, 00:32:04.499 "read-only": true 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "name": "base_device", 00:32:04.499 "bands": [ 00:32:04.499 { 00:32:04.499 "id": 0, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 
00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 1, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 2, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 3, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 4, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 5, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 6, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 7, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 8, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 9, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 10, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 11, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 12, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 13, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 14, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 15, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 16, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 17, 00:32:04.499 "state": "FREE", 00:32:04.499 "validity": 0.0 00:32:04.499 } 00:32:04.499 ], 00:32:04.499 "read-only": true 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "name": "cache_device", 00:32:04.499 "type": "bdev", 00:32:04.499 "chunks": [ 00:32:04.499 { 00:32:04.499 "id": 0, 00:32:04.499 "state": "INACTIVE", 00:32:04.499 "utilization": 0.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 1, 00:32:04.499 "state": "CLOSED", 00:32:04.499 "utilization": 1.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 2, 00:32:04.499 "state": "CLOSED", 00:32:04.499 "utilization": 1.0 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 3, 00:32:04.499 "state": "OPEN", 00:32:04.499 "utilization": 0.001953125 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "id": 4, 00:32:04.499 "state": "OPEN", 00:32:04.499 "utilization": 0.0 00:32:04.499 } 00:32:04.499 ], 00:32:04.499 "read-only": true 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "name": "verbose_mode", 00:32:04.499 "value": true, 00:32:04.499 "unit": "", 00:32:04.499 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:04.499 }, 00:32:04.499 { 00:32:04.499 "name": "prep_upgrade_on_shutdown", 00:32:04.499 "value": false, 00:32:04.499 "unit": "", 00:32:04.499 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:04.499 } 00:32:04.499 ] 00:32:04.499 } 00:32:04.499 13:57:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:04.758 [2024-11-06 13:57:58.575110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:04.758 [2024-11-06 13:57:58.575182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:04.758 [2024-11-06 13:57:58.575199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:04.758 [2024-11-06 13:57:58.575211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.758 [2024-11-06 13:57:58.575239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.758 [2024-11-06 13:57:58.575251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:04.758 [2024-11-06 13:57:58.575262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:04.758 [2024-11-06 13:57:58.575272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.758 [2024-11-06 13:57:58.575293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:04.758 [2024-11-06 13:57:58.575304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:04.758 [2024-11-06 13:57:58.575314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:04.758 [2024-11-06 13:57:58.575324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:04.758 [2024-11-06 13:57:58.575390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.273 ms, result 0 00:32:04.758 true 00:32:04.758 13:57:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:04.758 13:57:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:04.758 13:57:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:05.017 13:57:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:05.017 13:57:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:05.017 13:57:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:05.276 [2024-11-06 13:57:59.127632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.276 [2024-11-06 13:57:59.127704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:05.276 [2024-11-06 13:57:59.127724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:05.276 [2024-11-06 13:57:59.127736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.276 [2024-11-06 13:57:59.127764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.276 [2024-11-06 13:57:59.127777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:05.276 [2024-11-06 13:57:59.127788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:05.276 [2024-11-06 13:57:59.127799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.276 [2024-11-06 13:57:59.127820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.276 [2024-11-06 13:57:59.127832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:05.276 [2024-11-06 13:57:59.127843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:05.276 [2024-11-06 13:57:59.127853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:05.276 [2024-11-06 13:57:59.127921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.285 ms, result 0 00:32:05.276 true 00:32:05.276 13:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:05.535 { 00:32:05.535 "name": "ftl", 00:32:05.535 "properties": [ 00:32:05.535 { 00:32:05.535 "name": "superblock_version", 00:32:05.535 "value": 5, 00:32:05.535 "read-only": true 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "name": "base_device", 00:32:05.535 "bands": [ 00:32:05.535 { 00:32:05.535 "id": 0, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 1, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 2, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 3, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 4, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 5, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 6, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 7, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 8, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 9, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 10, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 11, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 12, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 13, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 14, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 15, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 16, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 17, 00:32:05.535 "state": "FREE", 00:32:05.535 "validity": 0.0 00:32:05.535 } 00:32:05.535 ], 00:32:05.535 "read-only": true 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "name": "cache_device", 00:32:05.535 "type": "bdev", 00:32:05.535 "chunks": [ 00:32:05.535 { 00:32:05.535 "id": 0, 00:32:05.535 "state": "INACTIVE", 00:32:05.535 "utilization": 0.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 1, 00:32:05.535 "state": "CLOSED", 00:32:05.535 "utilization": 1.0 00:32:05.535 }, 00:32:05.535 { 00:32:05.535 "id": 2, 00:32:05.535 "state": "CLOSED", 00:32:05.536 "utilization": 1.0 00:32:05.536 }, 00:32:05.536 { 00:32:05.536 "id": 3, 00:32:05.536 "state": "OPEN", 00:32:05.536 "utilization": 0.001953125 00:32:05.536 }, 00:32:05.536 { 00:32:05.536 "id": 4, 00:32:05.536 "state": "OPEN", 00:32:05.536 "utilization": 0.0 00:32:05.536 } 00:32:05.536 ], 00:32:05.536 "read-only": true 00:32:05.536 }, 00:32:05.536 { 00:32:05.536 "name": "verbose_mode", 
00:32:05.536 "value": true, 00:32:05.536 "unit": "", 00:32:05.536 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:05.536 }, 00:32:05.536 { 00:32:05.536 "name": "prep_upgrade_on_shutdown", 00:32:05.536 "value": true, 00:32:05.536 "unit": "", 00:32:05.536 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:05.536 } 00:32:05.536 ] 00:32:05.536 } 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80848 ]] 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80848 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80848 ']' 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80848 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80848 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:05.536 killing process with pid 80848 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80848' 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80848 00:32:05.536 13:57:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80848 00:32:06.914 [2024-11-06 13:58:00.765380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:06.914 [2024-11-06 13:58:00.786558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.914 [2024-11-06 13:58:00.786599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:06.914 [2024-11-06 13:58:00.786616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:06.914 [2024-11-06 13:58:00.786627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.914 [2024-11-06 13:58:00.786651] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:06.914 [2024-11-06 13:58:00.791319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.914 [2024-11-06 13:58:00.791349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:06.914 [2024-11-06 13:58:00.791361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.651 ms 00:32:06.914 [2024-11-06 13:58:00.791371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:07.968551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:07.968647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:15.135 [2024-11-06 13:58:07.968665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7177.096 ms 00:32:15.135 [2024-11-06 13:58:07.968682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:07.969858] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:07.969887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:15.135 [2024-11-06 13:58:07.969900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.154 ms 00:32:15.135 [2024-11-06 13:58:07.969913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:07.970890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:07.970916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:15.135 [2024-11-06 13:58:07.970929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms 00:32:15.135 [2024-11-06 13:58:07.970948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:07.987042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:07.987077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:15.135 [2024-11-06 13:58:07.987091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.046 ms 00:32:15.135 [2024-11-06 13:58:07.987103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:07.997703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:07.997759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:15.135 [2024-11-06 13:58:07.997778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.557 ms 00:32:15.135 [2024-11-06 13:58:07.997790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:07.997920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:07.997935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:15.135 [2024-11-06 13:58:07.997956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:32:15.135 [2024-11-06 13:58:07.997968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:08.015199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:08.015272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:15.135 [2024-11-06 13:58:08.015291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.208 ms 00:32:15.135 [2024-11-06 13:58:08.015305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:08.032468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:08.032540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:15.135 [2024-11-06 13:58:08.032557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.104 ms 00:32:15.135 [2024-11-06 13:58:08.032570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:08.049283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:08.049381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:15.135 [2024-11-06 13:58:08.049400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.659 ms 00:32:15.135 [2024-11-06 13:58:08.049411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:08.065199] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.135 [2024-11-06 13:58:08.065250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:15.135 [2024-11-06 13:58:08.065266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.657 ms 00:32:15.135 [2024-11-06 13:58:08.065277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.135 [2024-11-06 13:58:08.065317] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:15.135 [2024-11-06 13:58:08.065340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:15.135 [2024-11-06 13:58:08.065356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:15.135 [2024-11-06 13:58:08.065388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:15.135 [2024-11-06 13:58:08.065401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:15.135 [2024-11-06 13:58:08.065580] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:15.135 [2024-11-06 13:58:08.065591] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0677a2d7-978c-45f6-8156-1b7732eb1581 00:32:15.135 [2024-11-06 13:58:08.065604] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:15.136 [2024-11-06 13:58:08.065621] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:32:15.136 [2024-11-06 13:58:08.065633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:32:15.136 [2024-11-06 13:58:08.065645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:32:15.136 [2024-11-06 13:58:08.065655] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:15.136 [2024-11-06 13:58:08.065672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:15.136 [2024-11-06 13:58:08.065683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:15.136 [2024-11-06 13:58:08.065692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:15.136 [2024-11-06 13:58:08.065701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:15.136 [2024-11-06 13:58:08.065712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.136 [2024-11-06 13:58:08.065728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:15.136 [2024-11-06 13:58:08.065741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.397 ms 00:32:15.136 [2024-11-06 13:58:08.065753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.088994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.136 [2024-11-06 13:58:08.089081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:15.136 [2024-11-06 13:58:08.089100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.200 ms 00:32:15.136 [2024-11-06 13:58:08.089124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.089859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:15.136 [2024-11-06 13:58:08.089878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:15.136 [2024-11-06 13:58:08.089890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.671 ms 00:32:15.136 [2024-11-06 13:58:08.089901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.163164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.163246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:15.136 [2024-11-06 13:58:08.163274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.163286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.163364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.163378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:15.136 [2024-11-06 13:58:08.163390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.163401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.163576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.163592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:15.136 [2024-11-06 13:58:08.163605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.163621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.163643] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.163655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:15.136 [2024-11-06 13:58:08.163667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.163677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.310520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.310613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:15.136 [2024-11-06 13:58:08.310644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.310656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.427133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.427205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:15.136 [2024-11-06 13:58:08.427224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.427236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.427382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.427397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:15.136 [2024-11-06 13:58:08.427410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.427423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.427505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.427518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:15.136 [2024-11-06 13:58:08.427531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.427542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.427693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.427707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:15.136 [2024-11-06 13:58:08.427720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.427731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.427773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.427791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:15.136 [2024-11-06 13:58:08.427803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.427814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.427864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.427876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:15.136 [2024-11-06 13:58:08.427887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.427897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 
[2024-11-06 13:58:08.427954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:15.136 [2024-11-06 13:58:08.427967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:15.136 [2024-11-06 13:58:08.427979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:15.136 [2024-11-06 13:58:08.428005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:15.136 [2024-11-06 13:58:08.428194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7641.549 ms, result 0 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81418 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81418 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81418 ']' 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:18.422 13:58:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:18.422 [2024-11-06 13:58:11.898757] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
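The trace above is the harness's tcp_target_setup: after the clean 'FTL shutdown' management process finishes (result 0), spdk_tgt is relaunched on core 0 from the saved tgt.json and the script blocks in waitforlisten until the new process (pid 81418) answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern follows; polling via rpc_get_methods and the retry budget are assumptions about waitforlisten's internals, not read from this log:

    # Relaunch the target from the config written at shutdown time
    # (paths assume the SPDK repo root as the working directory).
    ./build/bin/spdk_tgt --cpumask="[0]" --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    # Poll until the RPC socket answers; rpc_get_methods is a core SPDK RPC
    # that succeeds as soon as the app is up and listening.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done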
00:32:18.422 [2024-11-06 13:58:11.898935] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81418 ] 00:32:18.422 [2024-11-06 13:58:12.092198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.422 [2024-11-06 13:58:12.240452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.798 [2024-11-06 13:58:13.440876] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:19.798 [2024-11-06 13:58:13.440963] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:19.798 [2024-11-06 13:58:13.591185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.591266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:19.798 [2024-11-06 13:58:13.591285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:19.798 [2024-11-06 13:58:13.591299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.591378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.591394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:19.798 [2024-11-06 13:58:13.591407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:19.798 [2024-11-06 13:58:13.591419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.591448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:19.798 [2024-11-06 13:58:13.592513] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:19.798 [2024-11-06 13:58:13.592544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.592557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:19.798 [2024-11-06 13:58:13.592569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.102 ms 00:32:19.798 [2024-11-06 13:58:13.592581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.595236] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:19.798 [2024-11-06 13:58:13.616605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.616647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:19.798 [2024-11-06 13:58:13.616677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.370 ms 00:32:19.798 [2024-11-06 13:58:13.616689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.616771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.616785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:19.798 [2024-11-06 13:58:13.616798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:32:19.798 [2024-11-06 13:58:13.616809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.629888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 
13:58:13.629918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:19.798 [2024-11-06 13:58:13.629931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.990 ms 00:32:19.798 [2024-11-06 13:58:13.629943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.630035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.630052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:19.798 [2024-11-06 13:58:13.630064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:32:19.798 [2024-11-06 13:58:13.630076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.630144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.630156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:19.798 [2024-11-06 13:58:13.630174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:19.798 [2024-11-06 13:58:13.630185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.630217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:19.798 [2024-11-06 13:58:13.636114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.636146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:19.798 [2024-11-06 13:58:13.636158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.906 ms 00:32:19.798 [2024-11-06 13:58:13.636174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.636205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.636216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:19.798 [2024-11-06 13:58:13.636227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:19.798 [2024-11-06 13:58:13.636238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.636283] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:19.798 [2024-11-06 13:58:13.636309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:19.798 [2024-11-06 13:58:13.636354] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:19.798 [2024-11-06 13:58:13.636373] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:19.798 [2024-11-06 13:58:13.636470] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:19.798 [2024-11-06 13:58:13.636484] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:19.798 [2024-11-06 13:58:13.636498] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:19.798 [2024-11-06 13:58:13.636511] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:19.798 [2024-11-06 13:58:13.636524] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:19.798 [2024-11-06 13:58:13.636540] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:19.798 [2024-11-06 13:58:13.636550] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:19.798 [2024-11-06 13:58:13.636560] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:19.798 [2024-11-06 13:58:13.636571] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:19.798 [2024-11-06 13:58:13.636582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.636593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:19.798 [2024-11-06 13:58:13.636604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:32:19.798 [2024-11-06 13:58:13.636615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.636689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.798 [2024-11-06 13:58:13.636701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:19.798 [2024-11-06 13:58:13.636711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:19.798 [2024-11-06 13:58:13.636726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.798 [2024-11-06 13:58:13.636822] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:19.798 [2024-11-06 13:58:13.636841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:19.798 [2024-11-06 13:58:13.636854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:19.798 [2024-11-06 13:58:13.636865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.798 [2024-11-06 13:58:13.636877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:19.798 [2024-11-06 13:58:13.636886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:19.798 [2024-11-06 13:58:13.636897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:19.798 [2024-11-06 13:58:13.636907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:19.798 [2024-11-06 13:58:13.636918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:19.798 [2024-11-06 13:58:13.636927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.798 [2024-11-06 13:58:13.636938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:19.798 [2024-11-06 13:58:13.636947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:19.798 [2024-11-06 13:58:13.636956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.798 [2024-11-06 13:58:13.636966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:19.798 [2024-11-06 13:58:13.636975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:19.798 [2024-11-06 13:58:13.636984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.798 [2024-11-06 13:58:13.636994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:19.798 [2024-11-06 13:58:13.637003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:19.798 [2024-11-06 13:58:13.637012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.798 [2024-11-06 13:58:13.637032] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:19.798 [2024-11-06 13:58:13.637042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:19.798 [2024-11-06 13:58:13.637052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:19.798 [2024-11-06 13:58:13.637061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:19.798 [2024-11-06 13:58:13.637070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:19.798 [2024-11-06 13:58:13.637079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:19.798 [2024-11-06 13:58:13.637101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:19.798 [2024-11-06 13:58:13.637110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:19.798 [2024-11-06 13:58:13.637119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:19.798 [2024-11-06 13:58:13.637130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:19.798 [2024-11-06 13:58:13.637140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:19.798 [2024-11-06 13:58:13.637150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:19.798 [2024-11-06 13:58:13.637159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:19.798 [2024-11-06 13:58:13.637169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:19.799 [2024-11-06 13:58:13.637178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.799 [2024-11-06 13:58:13.637188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:19.799 [2024-11-06 13:58:13.637197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:19.799 [2024-11-06 13:58:13.637206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.799 [2024-11-06 13:58:13.637215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:19.799 [2024-11-06 13:58:13.637225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:19.799 [2024-11-06 13:58:13.637235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.799 [2024-11-06 13:58:13.637247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:19.799 [2024-11-06 13:58:13.637257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:19.799 [2024-11-06 13:58:13.637266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.799 [2024-11-06 13:58:13.637275] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:19.799 [2024-11-06 13:58:13.637286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:19.799 [2024-11-06 13:58:13.637296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:19.799 [2024-11-06 13:58:13.637307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:19.799 [2024-11-06 13:58:13.637322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:19.799 [2024-11-06 13:58:13.637332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:19.799 [2024-11-06 13:58:13.637342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:19.799 [2024-11-06 13:58:13.637352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:19.799 [2024-11-06 13:58:13.637361] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:19.799 [2024-11-06 13:58:13.637371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:19.799 [2024-11-06 13:58:13.637382] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:19.799 [2024-11-06 13:58:13.637396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:19.799 [2024-11-06 13:58:13.637419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:19.799 [2024-11-06 13:58:13.637450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:19.799 [2024-11-06 13:58:13.637461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:19.799 [2024-11-06 13:58:13.637471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:19.799 [2024-11-06 13:58:13.637482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:19.799 [2024-11-06 13:58:13.637556] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:19.799 [2024-11-06 13:58:13.637568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:19.799 [2024-11-06 13:58:13.637603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:19.799 [2024-11-06 13:58:13.637618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:19.799 [2024-11-06 13:58:13.637631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:19.799 [2024-11-06 13:58:13.637643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.799 [2024-11-06 13:58:13.637654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:19.799 [2024-11-06 13:58:13.637665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:32:19.799 [2024-11-06 13:58:13.637676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.799 [2024-11-06 13:58:13.637731] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:19.799 [2024-11-06 13:58:13.637745] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:23.086 [2024-11-06 13:58:16.336555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.336646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:23.086 [2024-11-06 13:58:16.336666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2698.807 ms 00:32:23.086 [2024-11-06 13:58:16.336680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.385943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.386030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:23.086 [2024-11-06 13:58:16.386051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.829 ms 00:32:23.086 [2024-11-06 13:58:16.386064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.386213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.386236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:23.086 [2024-11-06 13:58:16.386250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:23.086 [2024-11-06 13:58:16.386261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.447418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.447490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:23.086 [2024-11-06 13:58:16.447509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 61.101 ms 00:32:23.086 [2024-11-06 13:58:16.447528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.447634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.447650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:23.086 [2024-11-06 13:58:16.447663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:23.086 [2024-11-06 13:58:16.447675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.448583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.448607] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:23.086 [2024-11-06 13:58:16.448620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.799 ms 00:32:23.086 [2024-11-06 13:58:16.448632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.448691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.448705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:23.086 [2024-11-06 13:58:16.448717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:23.086 [2024-11-06 13:58:16.448729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.477633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.477698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:23.086 [2024-11-06 13:58:16.477716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.874 ms 00:32:23.086 [2024-11-06 13:58:16.477730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.517271] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:23.086 [2024-11-06 13:58:16.517334] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:23.086 [2024-11-06 13:58:16.517353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.517367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:23.086 [2024-11-06 13:58:16.517381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.421 ms 00:32:23.086 [2024-11-06 13:58:16.517393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.540181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.540228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:23.086 [2024-11-06 13:58:16.540245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.729 ms 00:32:23.086 [2024-11-06 13:58:16.540257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.086 [2024-11-06 13:58:16.561244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.086 [2024-11-06 13:58:16.561287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:23.087 [2024-11-06 13:58:16.561303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.927 ms 00:32:23.087 [2024-11-06 13:58:16.561316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.583028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.583093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:23.087 [2024-11-06 13:58:16.583109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.651 ms 00:32:23.087 [2024-11-06 13:58:16.583121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.584192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.584230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:23.087 [2024-11-06 
13:58:16.584244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.938 ms 00:32:23.087 [2024-11-06 13:58:16.584256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.693943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.694035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:23.087 [2024-11-06 13:58:16.694056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 109.654 ms 00:32:23.087 [2024-11-06 13:58:16.694069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.708047] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:23.087 [2024-11-06 13:58:16.709667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.709696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:23.087 [2024-11-06 13:58:16.709712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.523 ms 00:32:23.087 [2024-11-06 13:58:16.709725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.709871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.709892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:23.087 [2024-11-06 13:58:16.709905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:23.087 [2024-11-06 13:58:16.709917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.709997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.710011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:23.087 [2024-11-06 13:58:16.710037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:23.087 [2024-11-06 13:58:16.710049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.710081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.710094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:23.087 [2024-11-06 13:58:16.710111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:23.087 [2024-11-06 13:58:16.710123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.710168] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:23.087 [2024-11-06 13:58:16.710183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.710196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:23.087 [2024-11-06 13:58:16.710208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:23.087 [2024-11-06 13:58:16.710220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.753567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.753635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:23.087 [2024-11-06 13:58:16.753655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.319 ms 00:32:23.087 [2024-11-06 13:58:16.753668] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.753775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.087 [2024-11-06 13:58:16.753790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:23.087 [2024-11-06 13:58:16.753805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:32:23.087 [2024-11-06 13:58:16.753817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.087 [2024-11-06 13:58:16.755515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3163.686 ms, result 0 00:32:23.087 [2024-11-06 13:58:16.770064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.087 [2024-11-06 13:58:16.786097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:23.087 [2024-11-06 13:58:16.797301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:23.087 13:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:23.087 13:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:32:23.087 13:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:23.087 13:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:23.087 13:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:23.346 [2024-11-06 13:58:17.089378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.346 [2024-11-06 13:58:17.089454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:23.346 [2024-11-06 13:58:17.089473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:23.346 [2024-11-06 13:58:17.089489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.346 [2024-11-06 13:58:17.089518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.346 [2024-11-06 13:58:17.089531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:23.346 [2024-11-06 13:58:17.089542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:23.346 [2024-11-06 13:58:17.089554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.346 [2024-11-06 13:58:17.089577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.346 [2024-11-06 13:58:17.089590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:23.346 [2024-11-06 13:58:17.089601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:23.346 [2024-11-06 13:58:17.089612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.346 [2024-11-06 13:58:17.089685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.311 ms, result 0 00:32:23.346 true 00:32:23.346 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:23.346 { 00:32:23.346 "name": "ftl", 00:32:23.346 "properties": [ 00:32:23.346 { 00:32:23.346 "name": "superblock_version", 00:32:23.346 "value": 5, 00:32:23.346 "read-only": true 00:32:23.346 }, 
00:32:23.346 { 00:32:23.346 "name": "base_device", 00:32:23.346 "bands": [ 00:32:23.346 { 00:32:23.346 "id": 0, 00:32:23.346 "state": "CLOSED", 00:32:23.346 "validity": 1.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 1, 00:32:23.346 "state": "CLOSED", 00:32:23.346 "validity": 1.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 2, 00:32:23.346 "state": "CLOSED", 00:32:23.346 "validity": 0.007843137254901933 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 3, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 4, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 5, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 6, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 7, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 8, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 9, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 10, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 11, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 12, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 13, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 14, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 15, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 16, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "id": 17, 00:32:23.346 "state": "FREE", 00:32:23.346 "validity": 0.0 00:32:23.346 } 00:32:23.346 ], 00:32:23.346 "read-only": true 00:32:23.346 }, 00:32:23.346 { 00:32:23.346 "name": "cache_device", 00:32:23.346 "type": "bdev", 00:32:23.346 "chunks": [ 00:32:23.347 { 00:32:23.347 "id": 0, 00:32:23.347 "state": "INACTIVE", 00:32:23.347 "utilization": 0.0 00:32:23.347 }, 00:32:23.347 { 00:32:23.347 "id": 1, 00:32:23.347 "state": "OPEN", 00:32:23.347 "utilization": 0.0 00:32:23.347 }, 00:32:23.347 { 00:32:23.347 "id": 2, 00:32:23.347 "state": "OPEN", 00:32:23.347 "utilization": 0.0 00:32:23.347 }, 00:32:23.347 { 00:32:23.347 "id": 3, 00:32:23.347 "state": "FREE", 00:32:23.347 "utilization": 0.0 00:32:23.347 }, 00:32:23.347 { 00:32:23.347 "id": 4, 00:32:23.347 "state": "FREE", 00:32:23.347 "utilization": 0.0 00:32:23.347 } 00:32:23.347 ], 00:32:23.347 "read-only": true 00:32:23.347 }, 00:32:23.347 { 00:32:23.347 "name": "verbose_mode", 00:32:23.347 "value": true, 00:32:23.347 "unit": "", 00:32:23.347 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:23.347 }, 00:32:23.347 { 00:32:23.347 "name": "prep_upgrade_on_shutdown", 00:32:23.347 "value": false, 00:32:23.347 "unit": "", 00:32:23.347 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:23.347 } 00:32:23.347 ] 00:32:23.347 } 00:32:23.347 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:23.347 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:32:23.347 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:23.605 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:23.605 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:23.605 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:23.605 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:23.605 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:24.172 Validate MD5 checksum, iteration 1 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:24.172 13:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:24.172 [2024-11-06 13:58:17.953844] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
00:32:24.172 [2024-11-06 13:58:17.953974] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81494 ] 00:32:24.172 [2024-11-06 13:58:18.130651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.431 [2024-11-06 13:58:18.262302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.336  [2024-11-06T13:58:20.886Z] Copying: 609/1024 [MB] (609 MBps) [2024-11-06T13:58:22.788Z] Copying: 1024/1024 [MB] (average 570 MBps) 00:32:28.805 00:32:28.805 13:58:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:28.805 13:58:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=db22e35ff2b54279287ceec3b355e83b 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ db22e35ff2b54279287ceec3b355e83b != \d\b\2\2\e\3\5\f\f\2\b\5\4\2\7\9\2\8\7\c\e\e\c\3\b\3\5\5\e\8\3\b ]] 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:30.707 Validate MD5 checksum, iteration 2 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:30.707 13:58:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:30.707 [2024-11-06 13:58:24.464967] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 
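Iteration 1's verdict is visible in the xtrace just above: the digest is cut from md5sum's output and tested with [[ ... != ... ]]; bash echoes the right-hand side with every character backslash-escaped because the right operand of != inside [[ ]] is a glob pattern, and escaping (or quoting) forces a literal comparison. The check reduces to the following (variable names illustrative):

    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    # Quoting the right-hand side makes [[ != ]] compare literally instead
    # of treating the stored digest as a glob pattern.
    if [[ "$sum" != "$expected_sum" ]]; then
        echo "MD5 mismatch: got $sum, expected $expected_sum" >&2
        return 1
    fi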
00:32:30.707 [2024-11-06 13:58:24.465156] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81561 ] 00:32:30.707 [2024-11-06 13:58:24.651864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.966 [2024-11-06 13:58:24.776039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.892  [2024-11-06T13:58:27.443Z] Copying: 593/1024 [MB] (593 MBps) [2024-11-06T13:58:29.972Z] Copying: 1024/1024 [MB] (average 561 MBps) 00:32:35.989 00:32:35.989 13:58:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:35.989 13:58:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2b4fec4495d227f2ce0e1507f3dc7ea7 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2b4fec4495d227f2ce0e1507f3dc7ea7 != \2\b\4\f\e\c\4\4\9\5\d\2\2\7\f\2\c\e\0\e\1\5\0\7\f\3\d\c\7\e\a\7 ]] 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81418 ]] 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81418 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81639 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81639 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81639 ']' 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:37.890 13:58:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:37.890 [2024-11-06 13:58:31.831631] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:32:37.890 [2024-11-06 13:58:31.831783] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81639 ] 00:32:38.149 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81418 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:38.149 [2024-11-06 13:58:32.015285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.408 [2024-11-06 13:58:32.191814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.343 [2024-11-06 13:58:33.204153] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:39.343 [2024-11-06 13:58:33.204227] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:39.602 [2024-11-06 13:58:33.351920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.602 [2024-11-06 13:58:33.351981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:39.602 [2024-11-06 13:58:33.351998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:39.602 [2024-11-06 13:58:33.352009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.602 [2024-11-06 13:58:33.352088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.602 [2024-11-06 13:58:33.352104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:39.602 [2024-11-06 13:58:33.352115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:32:39.602 [2024-11-06 13:58:33.352125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.602 [2024-11-06 13:58:33.352150] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:39.602 [2024-11-06 13:58:33.353245] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:39.602 [2024-11-06 13:58:33.353452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.602 [2024-11-06 13:58:33.353469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:39.602 [2024-11-06 13:58:33.353481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.306 ms 00:32:39.602 [2024-11-06 13:58:33.353491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.602 [2024-11-06 13:58:33.353922] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:39.602 [2024-11-06 13:58:33.378534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.602 [2024-11-06 13:58:33.378576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:39.602 [2024-11-06 13:58:33.378592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.611 ms 00:32:39.602 [2024-11-06 13:58:33.378603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.602 [2024-11-06 13:58:33.394134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:39.602 [2024-11-06 13:58:33.394288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:39.602 [2024-11-06 13:58:33.394315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:32:39.602 [2024-11-06 13:58:33.394327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.602 [2024-11-06 13:58:33.394913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.602 [2024-11-06 13:58:33.394939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:39.602 [2024-11-06 13:58:33.394952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.487 ms 00:32:39.602 [2024-11-06 13:58:33.394964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.602 [2024-11-06 13:58:33.395083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.602 [2024-11-06 13:58:33.395099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:39.602 [2024-11-06 13:58:33.395112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.097 ms 00:32:39.602 [2024-11-06 13:58:33.395123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.602 [2024-11-06 13:58:33.395155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.602 [2024-11-06 13:58:33.395168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:39.602 [2024-11-06 13:58:33.395179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:39.602 [2024-11-06 13:58:33.395191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.602 [2024-11-06 13:58:33.395220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:39.602 [2024-11-06 13:58:33.399500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.602 [2024-11-06 13:58:33.399532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:39.602 [2024-11-06 13:58:33.399544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.287 ms 00:32:39.603 [2024-11-06 13:58:33.399554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.603 [2024-11-06 13:58:33.399591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.603 [2024-11-06 13:58:33.399603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:39.603 [2024-11-06 13:58:33.399614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:39.603 [2024-11-06 13:58:33.399624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.603 [2024-11-06 13:58:33.399665] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:39.603 [2024-11-06 13:58:33.399687] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:39.603 [2024-11-06 13:58:33.399723] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:39.603 [2024-11-06 13:58:33.399744] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:39.603 [2024-11-06 13:58:33.399833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:39.603 [2024-11-06 13:58:33.399847] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:39.603 [2024-11-06 13:58:33.399860] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:39.603 [2024-11-06 13:58:33.399873] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:39.603 [2024-11-06 13:58:33.399885] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:39.603 [2024-11-06 13:58:33.399896] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:39.603 [2024-11-06 13:58:33.399906] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:39.603 [2024-11-06 13:58:33.399916] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:39.603 [2024-11-06 13:58:33.399926] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:39.603 [2024-11-06 13:58:33.399936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.603 [2024-11-06 13:58:33.399949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:39.603 [2024-11-06 13:58:33.399960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:32:39.603 [2024-11-06 13:58:33.399970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.603 [2024-11-06 13:58:33.400064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.603 [2024-11-06 13:58:33.400077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:39.603 [2024-11-06 13:58:33.400087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:32:39.603 [2024-11-06 13:58:33.400097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.603 [2024-11-06 13:58:33.400189] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:39.603 [2024-11-06 13:58:33.400201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:39.603 [2024-11-06 13:58:33.400216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:39.603 [2024-11-06 13:58:33.400226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:39.603 [2024-11-06 13:58:33.400246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:39.603 [2024-11-06 13:58:33.400266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:39.603 [2024-11-06 13:58:33.400276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:39.603 [2024-11-06 13:58:33.400285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:39.603 [2024-11-06 13:58:33.400304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:39.603 [2024-11-06 13:58:33.400312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:39.603 [2024-11-06 13:58:33.400332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:39.603 [2024-11-06 13:58:33.400341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:39.603 [2024-11-06 13:58:33.400361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:39.603 [2024-11-06 13:58:33.400370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:39.603 [2024-11-06 13:58:33.400389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:39.603 [2024-11-06 13:58:33.400399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:39.603 [2024-11-06 13:58:33.400408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:39.603 [2024-11-06 13:58:33.400428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:39.603 [2024-11-06 13:58:33.400437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:39.603 [2024-11-06 13:58:33.400447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:39.603 [2024-11-06 13:58:33.400457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:39.603 [2024-11-06 13:58:33.400466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:39.603 [2024-11-06 13:58:33.400476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:39.603 [2024-11-06 13:58:33.400485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:39.603 [2024-11-06 13:58:33.400494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:39.603 [2024-11-06 13:58:33.400504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:39.603 [2024-11-06 13:58:33.400513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:39.603 [2024-11-06 13:58:33.400523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:39.603 [2024-11-06 13:58:33.400542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:39.603 [2024-11-06 13:58:33.400551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:39.603 [2024-11-06 13:58:33.400570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:39.603 [2024-11-06 13:58:33.400598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:39.603 [2024-11-06 13:58:33.400607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400616] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:39.603 [2024-11-06 13:58:33.400626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:39.603 [2024-11-06 13:58:33.400636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:39.603 [2024-11-06 13:58:33.400645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:39.603 [2024-11-06 13:58:33.400656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:39.603 [2024-11-06 13:58:33.400666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:39.603 [2024-11-06 13:58:33.400677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:39.603 [2024-11-06 13:58:33.400687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:39.603 [2024-11-06 13:58:33.400696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:39.603 [2024-11-06 13:58:33.400706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:39.603 [2024-11-06 13:58:33.400719] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:39.603 [2024-11-06 13:58:33.400732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:39.603 [2024-11-06 13:58:33.400755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:39.603 [2024-11-06 13:58:33.400788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:39.603 [2024-11-06 13:58:33.400798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:39.603 [2024-11-06 13:58:33.400809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:39.603 [2024-11-06 13:58:33.400820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:39.603 [2024-11-06 13:58:33.400894] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:39.603 [2024-11-06 13:58:33.400905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:39.603 [2024-11-06 13:58:33.400933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:39.603 [2024-11-06 13:58:33.400944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:39.604 [2024-11-06 13:58:33.400954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:39.604 [2024-11-06 13:58:33.400966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.400976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:39.604 [2024-11-06 13:58:33.400987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.832 ms 00:32:39.604 [2024-11-06 13:58:33.400997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.438618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.438668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:39.604 [2024-11-06 13:58:33.438684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.537 ms 00:32:39.604 [2024-11-06 13:58:33.438696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.438752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.438764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:39.604 [2024-11-06 13:58:33.438775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:39.604 [2024-11-06 13:58:33.438785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.488334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.488381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:39.604 [2024-11-06 13:58:33.488397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.467 ms 00:32:39.604 [2024-11-06 13:58:33.488408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.488474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.488486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:39.604 [2024-11-06 13:58:33.488498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:39.604 [2024-11-06 13:58:33.488508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.488658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.488673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:39.604 [2024-11-06 13:58:33.488684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:32:39.604 [2024-11-06 13:58:33.488694] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.488737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.488748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:39.604 [2024-11-06 13:58:33.488759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:39.604 [2024-11-06 13:58:33.488769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.510918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.510968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:39.604 [2024-11-06 13:58:33.510986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.120 ms 00:32:39.604 [2024-11-06 13:58:33.511002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.511190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.511211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:39.604 [2024-11-06 13:58:33.511224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:32:39.604 [2024-11-06 13:58:33.511235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.550466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.550534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:39.604 [2024-11-06 13:58:33.550568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.199 ms 00:32:39.604 [2024-11-06 13:58:33.550580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.604 [2024-11-06 13:58:33.567014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.604 [2024-11-06 13:58:33.567075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:39.604 [2024-11-06 13:58:33.567100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.798 ms 00:32:39.604 [2024-11-06 13:58:33.567127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.863 [2024-11-06 13:58:33.662333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.863 [2024-11-06 13:58:33.662399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:39.863 [2024-11-06 13:58:33.662422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 95.117 ms 00:32:39.863 [2024-11-06 13:58:33.662434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.863 [2024-11-06 13:58:33.662652] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:39.863 [2024-11-06 13:58:33.662787] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:39.863 [2024-11-06 13:58:33.662913] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:39.863 [2024-11-06 13:58:33.663058] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:39.863 [2024-11-06 13:58:33.663074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.863 [2024-11-06 13:58:33.663086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:39.863 [2024-11-06 
13:58:33.663099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:32:39.863 [2024-11-06 13:58:33.663110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.863 [2024-11-06 13:58:33.663209] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:39.863 [2024-11-06 13:58:33.663226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.863 [2024-11-06 13:58:33.663242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:39.863 [2024-11-06 13:58:33.663254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:39.863 [2024-11-06 13:58:33.663265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.863 [2024-11-06 13:58:33.687967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.864 [2024-11-06 13:58:33.688032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:39.864 [2024-11-06 13:58:33.688049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.674 ms 00:32:39.864 [2024-11-06 13:58:33.688059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.864 [2024-11-06 13:58:33.702604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.864 [2024-11-06 13:58:33.702802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:39.864 [2024-11-06 13:58:33.702824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:39.864 [2024-11-06 13:58:33.702837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.864 [2024-11-06 13:58:33.702969] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:39.864 [2024-11-06 13:58:33.703196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.864 [2024-11-06 13:58:33.703210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:39.864 [2024-11-06 13:58:33.703222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.228 ms 00:32:39.864 [2024-11-06 13:58:33.703241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.430 [2024-11-06 13:58:34.199306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.430 [2024-11-06 13:58:34.199664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:40.430 [2024-11-06 13:58:34.199708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 494.778 ms 00:32:40.430 [2024-11-06 13:58:34.199731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.430 [2024-11-06 13:58:34.206245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.430 [2024-11-06 13:58:34.206291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:40.430 [2024-11-06 13:58:34.206308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.122 ms 00:32:40.430 [2024-11-06 13:58:34.206320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.430 [2024-11-06 13:58:34.206711] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:40.430 [2024-11-06 13:58:34.206735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.430 [2024-11-06 13:58:34.206748] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:40.430 [2024-11-06 13:58:34.206761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:32:40.430 [2024-11-06 13:58:34.206773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.430 [2024-11-06 13:58:34.206806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.430 [2024-11-06 13:58:34.206820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:40.430 [2024-11-06 13:58:34.206832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:40.430 [2024-11-06 13:58:34.206843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.430 [2024-11-06 13:58:34.206890] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 503.917 ms, result 0 00:32:40.430 [2024-11-06 13:58:34.206953] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:40.430 [2024-11-06 13:58:34.207061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.430 [2024-11-06 13:58:34.207075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:40.430 [2024-11-06 13:58:34.207087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.110 ms 00:32:40.430 [2024-11-06 13:58:34.207099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.693184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.693264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:40.997 [2024-11-06 13:58:34.693284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 484.663 ms 00:32:40.997 [2024-11-06 13:58:34.693298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.699544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.699598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:40.997 [2024-11-06 13:58:34.699614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.878 ms 00:32:40.997 [2024-11-06 13:58:34.699625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.700062] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:40.997 [2024-11-06 13:58:34.700086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.700098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:40.997 [2024-11-06 13:58:34.700123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:32:40.997 [2024-11-06 13:58:34.700134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.700172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.700185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:40.997 [2024-11-06 13:58:34.700197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:40.997 [2024-11-06 13:58:34.700208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 
13:58:34.700252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 493.292 ms, result 0 00:32:40.997 [2024-11-06 13:58:34.700298] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:40.997 [2024-11-06 13:58:34.700312] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:40.997 [2024-11-06 13:58:34.700326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.700338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:40.997 [2024-11-06 13:58:34.700350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 997.372 ms 00:32:40.997 [2024-11-06 13:58:34.700361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.700398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.700410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:40.997 [2024-11-06 13:58:34.700426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:40.997 [2024-11-06 13:58:34.700437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.715091] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:40.997 [2024-11-06 13:58:34.715477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.715502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:40.997 [2024-11-06 13:58:34.715518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.019 ms 00:32:40.997 [2024-11-06 13:58:34.715531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.716226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.716250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:40.997 [2024-11-06 13:58:34.716265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:32:40.997 [2024-11-06 13:58:34.716276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.718593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.718751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:40.997 [2024-11-06 13:58:34.718774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.289 ms 00:32:40.997 [2024-11-06 13:58:34.718787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.718852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.718866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:40.997 [2024-11-06 13:58:34.718879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:40.997 [2024-11-06 13:58:34.718895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.719037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.997 [2024-11-06 13:58:34.719053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:40.997 
[2024-11-06 13:58:34.719066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:40.997 [2024-11-06 13:58:34.719077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.997 [2024-11-06 13:58:34.719104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.998 [2024-11-06 13:58:34.719117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:40.998 [2024-11-06 13:58:34.719129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:40.998 [2024-11-06 13:58:34.719141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.998 [2024-11-06 13:58:34.719183] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:40.998 [2024-11-06 13:58:34.719197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.998 [2024-11-06 13:58:34.719209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:40.998 [2024-11-06 13:58:34.719220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:40.998 [2024-11-06 13:58:34.719232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.998 [2024-11-06 13:58:34.719302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:40.998 [2024-11-06 13:58:34.719315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:40.998 [2024-11-06 13:58:34.719327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:32:40.998 [2024-11-06 13:58:34.719339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:40.998 [2024-11-06 13:58:34.720613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1368.181 ms, result 0 00:32:40.998 [2024-11-06 13:58:34.736186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.998 [2024-11-06 13:58:34.752164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:40.998 [2024-11-06 13:58:34.762797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:40.998 Validate MD5 checksum, iteration 1 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:40.998 13:58:34 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:40.998 13:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:40.998 [2024-11-06 13:58:34.946787] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization... 00:32:40.998 [2024-11-06 13:58:34.947180] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81678 ] 00:32:41.255 [2024-11-06 13:58:35.161212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.513 [2024-11-06 13:58:35.340041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.431  [2024-11-06T13:58:37.983Z] Copying: 626/1024 [MB] (626 MBps) [2024-11-06T13:58:42.166Z] Copying: 1024/1024 [MB] (average 614 MBps) 00:32:48.183 00:32:48.183 13:58:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:48.183 13:58:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:49.559 Validate MD5 checksum, iteration 2 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=db22e35ff2b54279287ceec3b355e83b 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ db22e35ff2b54279287ceec3b355e83b != \d\b\2\2\e\3\5\f\f\2\b\5\4\2\7\9\2\8\7\c\e\e\c\3\b\3\5\5\e\8\3\b ]] 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:49.559 13:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:49.559 [2024-11-06 13:58:43.434957] Starting SPDK v25.01-pre git sha1 
40c30569f / DPDK 24.03.0 initialization... 00:32:49.559 [2024-11-06 13:58:43.435511] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81769 ] 00:32:49.819 [2024-11-06 13:58:43.606768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.819 [2024-11-06 13:58:43.725992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.723  [2024-11-06T13:58:46.641Z] Copying: 557/1024 [MB] (557 MBps) [2024-11-06T13:58:48.018Z] Copying: 1024/1024 [MB] (average 536 MBps) 00:32:54.035 00:32:54.035 13:58:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:54.035 13:58:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2b4fec4495d227f2ce0e1507f3dc7ea7 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2b4fec4495d227f2ce0e1507f3dc7ea7 != \2\b\4\f\e\c\4\4\9\5\d\2\2\7\f\2\c\e\0\e\1\5\0\7\f\3\d\c\7\e\a\7 ]] 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81639 ]] 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81639 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81639 ']' 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81639 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81639 00:32:55.999 killing process with pid 81639 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81639' 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 81639 00:32:55.999 13:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81639 00:32:57.375 [2024-11-06 13:58:51.126643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:57.375 [2024-11-06 13:58:51.148626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.375 [2024-11-06 13:58:51.148678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:57.375 [2024-11-06 13:58:51.148698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:57.375 [2024-11-06 13:58:51.148712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.375 [2024-11-06 13:58:51.148742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:57.375 [2024-11-06 13:58:51.153778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.375 [2024-11-06 13:58:51.153816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:57.375 [2024-11-06 13:58:51.153839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.015 ms 00:32:57.375 [2024-11-06 13:58:51.153851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.375 [2024-11-06 13:58:51.154119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.375 [2024-11-06 13:58:51.154137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:57.375 [2024-11-06 13:58:51.154151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.234 ms 00:32:57.375 [2024-11-06 13:58:51.154166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.375 [2024-11-06 13:58:51.155315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.375 [2024-11-06 13:58:51.155503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:57.375 [2024-11-06 13:58:51.155528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.127 ms 00:32:57.376 [2024-11-06 13:58:51.155541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.156518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.156553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:57.376 [2024-11-06 13:58:51.156580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.919 ms 00:32:57.376 [2024-11-06 13:58:51.156595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.172729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.172772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:57.376 [2024-11-06 13:58:51.172790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.088 ms 00:32:57.376 [2024-11-06 13:58:51.172811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.181046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.181088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:57.376 [2024-11-06 13:58:51.181105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.189 ms 00:32:57.376 [2024-11-06 13:58:51.181118] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.181231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.181249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:57.376 [2024-11-06 13:58:51.181262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:32:57.376 [2024-11-06 13:58:51.181275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.196556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.196728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:57.376 [2024-11-06 13:58:51.196752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.252 ms 00:32:57.376 [2024-11-06 13:58:51.196765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.211938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.212134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:57.376 [2024-11-06 13:58:51.212160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.130 ms 00:32:57.376 [2024-11-06 13:58:51.212173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.227205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.227372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:57.376 [2024-11-06 13:58:51.227396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.987 ms 00:32:57.376 [2024-11-06 13:58:51.227410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.242405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.242584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:57.376 [2024-11-06 13:58:51.242607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.852 ms 00:32:57.376 [2024-11-06 13:58:51.242620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.242707] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:57.376 [2024-11-06 13:58:51.242728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:57.376 [2024-11-06 13:58:51.242744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:57.376 [2024-11-06 13:58:51.242758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:57.376 [2024-11-06 13:58:51.242772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 
[2024-11-06 13:58:51.242839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:57.376 [2024-11-06 13:58:51.242971] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:57.376 [2024-11-06 13:58:51.242983] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0677a2d7-978c-45f6-8156-1b7732eb1581 00:32:57.376 [2024-11-06 13:58:51.242997] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:57.376 [2024-11-06 13:58:51.243009] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:57.376 [2024-11-06 13:58:51.243037] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:57.376 [2024-11-06 13:58:51.243051] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:57.376 [2024-11-06 13:58:51.243063] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:57.376 [2024-11-06 13:58:51.243077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:57.376 [2024-11-06 13:58:51.243091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:57.376 [2024-11-06 13:58:51.243102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:57.376 [2024-11-06 13:58:51.243113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:57.376 [2024-11-06 13:58:51.243126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.243148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:57.376 [2024-11-06 13:58:51.243164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.420 ms 00:32:57.376 [2024-11-06 13:58:51.243177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:57.376 [2024-11-06 13:58:51.264828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:57.376 [2024-11-06 13:58:51.264867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:57.376 [2024-11-06 13:58:51.264883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.610 ms 00:32:57.376 [2024-11-06 13:58:51.264897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:32:57.376 [2024-11-06 13:58:51.265549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:57.376 [2024-11-06 13:58:51.265571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:32:57.376 [2024-11-06 13:58:51.265584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.625 ms
00:32:57.376 [2024-11-06 13:58:51.265596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.376 [2024-11-06 13:58:51.337383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.376 [2024-11-06 13:58:51.337430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:32:57.376 [2024-11-06 13:58:51.337446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.376 [2024-11-06 13:58:51.337461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.376 [2024-11-06 13:58:51.337513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.376 [2024-11-06 13:58:51.337529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:32:57.376 [2024-11-06 13:58:51.337543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.376 [2024-11-06 13:58:51.337556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.376 [2024-11-06 13:58:51.337693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.376 [2024-11-06 13:58:51.337722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:32:57.376 [2024-11-06 13:58:51.337735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.376 [2024-11-06 13:58:51.337748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.376 [2024-11-06 13:58:51.337772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.376 [2024-11-06 13:58:51.337793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:32:57.376 [2024-11-06 13:58:51.337807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.376 [2024-11-06 13:58:51.337819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.481366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.635 [2024-11-06 13:58:51.481437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:32:57.635 [2024-11-06 13:58:51.481457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.635 [2024-11-06 13:58:51.481471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.594477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.635 [2024-11-06 13:58:51.594570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:32:57.635 [2024-11-06 13:58:51.594590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.635 [2024-11-06 13:58:51.594604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.594771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.635 [2024-11-06 13:58:51.594787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:32:57.635 [2024-11-06 13:58:51.594801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.635 [2024-11-06 13:58:51.594814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.594881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.635 [2024-11-06 13:58:51.594897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:32:57.635 [2024-11-06 13:58:51.594919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.635 [2024-11-06 13:58:51.594947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.595108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.635 [2024-11-06 13:58:51.595126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:32:57.635 [2024-11-06 13:58:51.595138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.635 [2024-11-06 13:58:51.595167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.595220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.635 [2024-11-06 13:58:51.595237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:32:57.635 [2024-11-06 13:58:51.595250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.635 [2024-11-06 13:58:51.595270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.595324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.635 [2024-11-06 13:58:51.595338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:32:57.635 [2024-11-06 13:58:51.595350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.635 [2024-11-06 13:58:51.595363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.595423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:57.635 [2024-11-06 13:58:51.595438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:32:57.635 [2024-11-06 13:58:51.595456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:32:57.635 [2024-11-06 13:58:51.595469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:57.635 [2024-11-06 13:58:51.595634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 446.977 ms, result 0
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:32:59.537 Remove shared memory files
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81418
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:32:59.537 ************************************
00:32:59.537 END TEST ftl_upgrade_shutdown
00:32:59.537 ************************************
00:32:59.537
00:32:59.537 real 1m32.842s
00:32:59.537 user 2m7.819s
00:32:59.537 sys 0m25.764s
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:59.537 13:58:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:59.537 13:58:53 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:32:59.537 13:58:53 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:32:59.537 13:58:53 ftl -- ftl/ftl.sh@14 -- # killprocess 74486
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@952 -- # '[' -z 74486 ']'
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@956 -- # kill -0 74486
00:32:59.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74486) - No such process
00:32:59.537 Process with pid 74486 is not found
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 74486 is not found'
00:32:59.537 13:58:53 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:32:59.537 13:58:53 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81900
00:32:59.537 13:58:53 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:59.537 13:58:53 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81900
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@833 -- # '[' -z 81900 ']'
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:59.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:59.537 13:58:53 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:59.537 [2024-11-06 13:58:53.269684] Starting SPDK v25.01-pre git sha1 40c30569f / DPDK 24.03.0 initialization...
00:32:59.538 [2024-11-06 13:58:53.269866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81900 ]
00:32:59.538 [2024-11-06 13:58:53.462159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:59.797 [2024-11-06 13:58:53.611109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:00.731 13:58:54 ftl -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:00.731 13:58:54 ftl -- common/autotest_common.sh@866 -- # return 0
00:33:00.731 13:58:54 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:01.298 nvme0n1
00:33:01.298 13:58:55 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:01.298 13:58:55 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:01.298 13:58:55 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:01.298 13:58:55 ftl -- ftl/common.sh@28 -- # stores=e06e090e-3e62-4458-aef1-e77f24ab0d9e
00:33:01.298 13:58:55 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:01.298 13:58:55 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e06e090e-3e62-4458-aef1-e77f24ab0d9e
00:33:01.557 13:58:55 ftl -- ftl/ftl.sh@23 -- # killprocess 81900
00:33:01.557 13:58:55 ftl -- common/autotest_common.sh@952 -- # '[' -z 81900 ']'
00:33:01.557 13:58:55 ftl -- common/autotest_common.sh@956 -- # kill -0 81900
00:33:01.557 13:58:55 ftl -- common/autotest_common.sh@957 -- # uname
00:33:01.557 13:58:55 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:01.557 13:58:55 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81900
00:33:01.816 killing process with pid 81900
00:33:01.816 13:58:55 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:33:01.816 13:58:55 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:33:01.816 13:58:55 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81900'
00:33:01.816 13:58:55 ftl -- common/autotest_common.sh@971 -- # kill 81900
00:33:01.816 13:58:55 ftl -- common/autotest_common.sh@976 -- # wait 81900
00:33:04.403 13:58:58 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:04.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:04.661 Waiting for block devices as requested
00:33:04.918 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:04.918 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:04.918 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:05.177 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:33:10.448 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:33:10.448 Remove shared memory files
00:33:10.448 13:59:04 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:33:10.448 13:59:04 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:10.448 13:59:04 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:33:10.448 13:59:04 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:33:10.448 13:59:04 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:33:10.448 13:59:04 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:10.448 13:59:04 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:33:10.448 ************************************
00:33:10.448 END TEST ftl
00:33:10.448 ************************************
00:33:10.448
00:33:10.448 real 10m57.163s
00:33:10.448 user 13m35.534s
00:33:10.448 sys 1m38.896s
00:33:10.448 13:59:04 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable
00:33:10.448 13:59:04 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:10.448 13:59:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:33:10.448 13:59:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:33:10.448 13:59:04 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:33:10.448 13:59:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:33:10.448 13:59:04 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:33:10.448 13:59:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:33:10.448 13:59:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:33:10.448 13:59:04 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:33:10.448 13:59:04 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:33:10.448 13:59:04 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:33:10.448 13:59:04 -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:10.448 13:59:04 -- common/autotest_common.sh@10 -- # set +x
00:33:10.448 13:59:04 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:33:10.448 13:59:04 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:33:10.448 13:59:04 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:33:10.448 13:59:04 -- common/autotest_common.sh@10 -- # set +x
00:33:12.350 INFO: APP EXITING
00:33:12.350 INFO: killing all VMs
00:33:12.350 INFO: killing vhost app
00:33:12.350 INFO: EXIT DONE
00:33:12.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:13.194 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:33:13.452 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:33:13.452 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:33:13.452 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:33:13.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:14.279 Cleaning
00:33:14.279 Removing: /var/run/dpdk/spdk0/config
00:33:14.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:14.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:14.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:14.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:14.279 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:14.279 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:14.279 Removing: /var/run/dpdk/spdk0
00:33:14.279 Removing: /var/run/dpdk/spdk_pid57671
00:33:14.279 Removing: /var/run/dpdk/spdk_pid57922
00:33:14.279 Removing: /var/run/dpdk/spdk_pid58162
00:33:14.279 Removing: /var/run/dpdk/spdk_pid58272
00:33:14.279 Removing: /var/run/dpdk/spdk_pid58339
00:33:14.279 Removing: /var/run/dpdk/spdk_pid58478
00:33:14.279 Removing: /var/run/dpdk/spdk_pid58496
00:33:14.279 Removing: /var/run/dpdk/spdk_pid58717
00:33:14.279 Removing: /var/run/dpdk/spdk_pid58835
00:33:14.279 Removing: /var/run/dpdk/spdk_pid58953
00:33:14.279 Removing: /var/run/dpdk/spdk_pid59085
00:33:14.279 Removing: /var/run/dpdk/spdk_pid59200
00:33:14.279 Removing: /var/run/dpdk/spdk_pid59239
00:33:14.279 Removing: /var/run/dpdk/spdk_pid59279
00:33:14.279 Removing: /var/run/dpdk/spdk_pid59352
00:33:14.279 Removing: /var/run/dpdk/spdk_pid59469
00:33:14.279 Removing: /var/run/dpdk/spdk_pid59950
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60037
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60122
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60143
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60312
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60335
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60505
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60527
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60607
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60631
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60706
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60729
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60941
00:33:14.279 Removing: /var/run/dpdk/spdk_pid60983
00:33:14.279 Removing: /var/run/dpdk/spdk_pid61071
00:33:14.279 Removing: /var/run/dpdk/spdk_pid61266
00:33:14.279 Removing: /var/run/dpdk/spdk_pid61367
00:33:14.279 Removing: /var/run/dpdk/spdk_pid61415
00:33:14.279 Removing: /var/run/dpdk/spdk_pid61902
00:33:14.279 Removing: /var/run/dpdk/spdk_pid62006
00:33:14.279 Removing: /var/run/dpdk/spdk_pid62120
00:33:14.279 Removing: /var/run/dpdk/spdk_pid62179
00:33:14.279 Removing: /var/run/dpdk/spdk_pid62210
00:33:14.279 Removing: /var/run/dpdk/spdk_pid62294
00:33:14.279 Removing: /var/run/dpdk/spdk_pid62937
00:33:14.279 Removing: /var/run/dpdk/spdk_pid62979
00:33:14.279 Removing: /var/run/dpdk/spdk_pid63498
00:33:14.279 Removing: /var/run/dpdk/spdk_pid63607
00:33:14.279 Removing: /var/run/dpdk/spdk_pid63722
00:33:14.538 Removing: /var/run/dpdk/spdk_pid63780
00:33:14.538 Removing: /var/run/dpdk/spdk_pid63806
00:33:14.538 Removing: /var/run/dpdk/spdk_pid63837
00:33:14.538 Removing: /var/run/dpdk/spdk_pid65744
00:33:14.538 Removing: /var/run/dpdk/spdk_pid65897
00:33:14.538 Removing: /var/run/dpdk/spdk_pid65908
00:33:14.538 Removing: /var/run/dpdk/spdk_pid65922
00:33:14.538 Removing: /var/run/dpdk/spdk_pid65971
00:33:14.538 Removing: /var/run/dpdk/spdk_pid65975
00:33:14.538 Removing: /var/run/dpdk/spdk_pid65987
00:33:14.538 Removing: /var/run/dpdk/spdk_pid66038
00:33:14.538 Removing: /var/run/dpdk/spdk_pid66042
00:33:14.538 Removing: /var/run/dpdk/spdk_pid66060
00:33:14.538 Removing: /var/run/dpdk/spdk_pid66104
00:33:14.538 Removing: /var/run/dpdk/spdk_pid66108
00:33:14.538 Removing: /var/run/dpdk/spdk_pid66126
00:33:14.538 Removing: /var/run/dpdk/spdk_pid67519
00:33:14.538 Removing: /var/run/dpdk/spdk_pid67634
00:33:14.538 Removing: /var/run/dpdk/spdk_pid69061
00:33:14.538 Removing: /var/run/dpdk/spdk_pid70424
00:33:14.538 Removing: /var/run/dpdk/spdk_pid70544
00:33:14.538 Removing: /var/run/dpdk/spdk_pid70665
00:33:14.538 Removing: /var/run/dpdk/spdk_pid70786
00:33:14.538 Removing: /var/run/dpdk/spdk_pid70931
00:33:14.538 Removing: /var/run/dpdk/spdk_pid71011
00:33:14.538 Removing: /var/run/dpdk/spdk_pid71164
00:33:14.538 Removing: /var/run/dpdk/spdk_pid71540
00:33:14.538 Removing: /var/run/dpdk/spdk_pid71582
00:33:14.538 Removing: /var/run/dpdk/spdk_pid72090
00:33:14.538 Removing: /var/run/dpdk/spdk_pid72284
00:33:14.538 Removing: /var/run/dpdk/spdk_pid72388
00:33:14.538 Removing: /var/run/dpdk/spdk_pid72508
00:33:14.538 Removing: /var/run/dpdk/spdk_pid72570
00:33:14.538 Removing: /var/run/dpdk/spdk_pid72601
00:33:14.538 Removing: /var/run/dpdk/spdk_pid72916
00:33:14.538 Removing: /var/run/dpdk/spdk_pid72993
00:33:14.538 Removing: /var/run/dpdk/spdk_pid73084
00:33:14.538 Removing: /var/run/dpdk/spdk_pid73528
00:33:14.538 Removing: /var/run/dpdk/spdk_pid73681
00:33:14.538 Removing: /var/run/dpdk/spdk_pid74486
00:33:14.538 Removing: /var/run/dpdk/spdk_pid74635
00:33:14.538 Removing: /var/run/dpdk/spdk_pid74854
00:33:14.538 Removing: /var/run/dpdk/spdk_pid74958
00:33:14.538 Removing: /var/run/dpdk/spdk_pid75289
00:33:14.538 Removing: /var/run/dpdk/spdk_pid75554
00:33:14.538 Removing: /var/run/dpdk/spdk_pid75917
00:33:14.538 Removing: /var/run/dpdk/spdk_pid76138
00:33:14.538 Removing: /var/run/dpdk/spdk_pid76254
00:33:14.538 Removing: /var/run/dpdk/spdk_pid76333
00:33:14.538 Removing: /var/run/dpdk/spdk_pid76467
00:33:14.538 Removing: /var/run/dpdk/spdk_pid76503
00:33:14.538 Removing: /var/run/dpdk/spdk_pid76578
00:33:14.538 Removing: /var/run/dpdk/spdk_pid76784
00:33:14.538 Removing: /var/run/dpdk/spdk_pid77048
00:33:14.538 Removing: /var/run/dpdk/spdk_pid77429
00:33:14.538 Removing: /var/run/dpdk/spdk_pid77829
00:33:14.538 Removing: /var/run/dpdk/spdk_pid78209
00:33:14.538 Removing: /var/run/dpdk/spdk_pid78675
00:33:14.538 Removing: /var/run/dpdk/spdk_pid78823
00:33:14.538 Removing: /var/run/dpdk/spdk_pid78927
00:33:14.538 Removing: /var/run/dpdk/spdk_pid79549
00:33:14.538 Removing: /var/run/dpdk/spdk_pid79624
00:33:14.538 Removing: /var/run/dpdk/spdk_pid80062
00:33:14.538 Removing: /var/run/dpdk/spdk_pid80392
00:33:14.538 Removing: /var/run/dpdk/spdk_pid80848
00:33:14.538 Removing: /var/run/dpdk/spdk_pid80969
00:33:14.538 Removing: /var/run/dpdk/spdk_pid81024
00:33:14.538 Removing: /var/run/dpdk/spdk_pid81094
00:33:14.538 Removing: /var/run/dpdk/spdk_pid81154
00:33:14.538 Removing: /var/run/dpdk/spdk_pid81218
00:33:14.538 Removing: /var/run/dpdk/spdk_pid81418
00:33:14.797 Removing: /var/run/dpdk/spdk_pid81494
00:33:14.797 Removing: /var/run/dpdk/spdk_pid81561
00:33:14.797 Removing: /var/run/dpdk/spdk_pid81639
00:33:14.797 Removing: /var/run/dpdk/spdk_pid81678
00:33:14.797 Removing: /var/run/dpdk/spdk_pid81769
00:33:14.797 Removing: /var/run/dpdk/spdk_pid81900
00:33:14.797 Clean
00:33:14.797 13:59:08 -- common/autotest_common.sh@1451 -- # return 0
00:33:14.797 13:59:08 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:33:14.797 13:59:08 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:14.797 13:59:08 -- common/autotest_common.sh@10 -- # set +x
00:33:14.797 13:59:08 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:33:14.797 13:59:08 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:14.797 13:59:08 -- common/autotest_common.sh@10 -- # set +x
00:33:14.797 13:59:08 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:14.797 13:59:08 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:14.797 13:59:08 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:14.797 13:59:08 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:33:14.797 13:59:08 -- spdk/autotest.sh@394 -- # hostname
00:33:14.797 13:59:08 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:15.055 geninfo: WARNING: invalid characters removed from testname!
00:33:41.623 13:59:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:44.156 13:59:38 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:46.689 13:59:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:49.266 13:59:43 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:51.800 13:59:45 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:54.337 13:59:47 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:56.930 13:59:50 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:56.930 13:59:50 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:56.930 13:59:50 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:33:56.930 13:59:50 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:56.930 13:59:50 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:56.930 13:59:50 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:56.930 + [[ -n 5304 ]]
00:33:56.930 + sudo kill 5304
00:33:56.938 [Pipeline] }
00:33:56.953 [Pipeline] // timeout
00:33:56.959 [Pipeline] }
00:33:56.972 [Pipeline] // stage
00:33:56.977 [Pipeline] }
00:33:56.991 [Pipeline] // catchError
00:33:57.001 [Pipeline] stage
00:33:57.003 [Pipeline] { (Stop VM)
00:33:57.013 [Pipeline] sh
00:33:57.293 + vagrant halt
00:34:00.582 ==> default: Halting domain...
00:34:07.166 [Pipeline] sh
00:34:07.448 + vagrant destroy -f
00:34:11.642 ==> default: Removing domain...
00:34:11.654 [Pipeline] sh
00:34:11.937 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:11.945 [Pipeline] }
00:34:11.960 [Pipeline] // stage
00:34:11.965 [Pipeline] }
00:34:11.979 [Pipeline] // dir
00:34:11.985 [Pipeline] }
00:34:11.999 [Pipeline] // wrap
00:34:12.005 [Pipeline] }
00:34:12.018 [Pipeline] // catchError
00:34:12.027 [Pipeline] stage
00:34:12.030 [Pipeline] { (Epilogue)
00:34:12.044 [Pipeline] sh
00:34:12.328 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:20.450 [Pipeline] catchError
00:34:20.452 [Pipeline] {
00:34:20.465 [Pipeline] sh
00:34:20.748 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:21.005 Artifacts sizes are good
00:34:21.013 [Pipeline] }
00:34:21.027 [Pipeline] // catchError
00:34:21.038 [Pipeline] archiveArtifacts
00:34:21.045 Archiving artifacts
00:34:21.189 [Pipeline] cleanWs
00:34:21.201 [WS-CLEANUP] Deleting project workspace...
00:34:21.201 [WS-CLEANUP] Deferred wipeout is used...
00:34:21.208 [WS-CLEANUP] done
00:34:21.210 [Pipeline] }
00:34:21.225 [Pipeline] // stage
00:34:21.231 [Pipeline] }
00:34:21.244 [Pipeline] // node
00:34:21.250 [Pipeline] End of Pipeline
00:34:21.284 Finished: SUCCESS